My gut feeling is that LessWrong is cringe, and the heavy link to the Effective Altruism Forum is making the Forum cringe.
Trying to explain this feeling, I'd say some features I don't like are:
Ignoring emotional responses and optics in favour of pure open dialogue. Feels very New Atheist.
The long pieces of independent research that are extremely difficult to independently verify and which often defer to other pieces of difficult-to-verify independent research.
Heavy use of expected value calculations rather than emphasising the uncertainty and cluelessness around a lot of our numbers.
The more-karma-more-votes system that encourages an echo chamber.
I disagree-voted.
I think pure open dialogue is often good for communities. You will find evidence for this if you look at almost any social movement, the FTX fiasco, and immoral mazes.
Most long pieces of independent research that I see are made by Open Phil, and I see far more EAs deferring to Open Phil's opinion on a variety of subjects than LessWrongers. Any examples you have in mind would be helpful.
It was originally EAs who used such explicit expected value calculations, back in the GiveWell era, and I don't think I've ever seen an EV calculation done on LessWrong.
I think the more-karma-more-votes system is mostly good, but not perfect. In particular, it seems likely to reduce the impact of posts which are popular outside EA but not particularly relevant to EAs, a problem many subreddits have.
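To make the mechanism under debate concrete, here is a minimal sketch of how a karma-weighted ("more karma, more votes") scoring system can work. The tiers and weights below are hypothetical illustrations, not the EA Forum's actual parameters.

```python
# Hypothetical sketch of a "more karma, more votes" system.
# The karma thresholds and vote weights are made up for
# illustration; they are not the EA Forum's real parameters.

VOTE_WEIGHT_TIERS = [
    (0, 1),      # under 10 karma: a vote counts once
    (10, 2),     # 10+ karma: counts twice
    (100, 4),    # 100+ karma
    (1000, 8),   # 1000+ karma
]

def vote_weight(karma: int) -> int:
    """Return the vote weight for a user with the given karma."""
    weight = 1
    for threshold, w in VOTE_WEIGHT_TIERS:
        if karma >= threshold:
            weight = w
    return weight

def post_score(votes: list[tuple[int, int]]) -> int:
    """Sum weighted votes; each vote is (voter_karma, direction)."""
    return sum(vote_weight(karma) * direction for karma, direction in votes)

# Two high-karma upvoters outweigh five low-karma downvoters:
# 8 + 8 - 1 - (4 * 1) = 11. This is the dynamic the original post
# calls an echo chamber and the reply calls a useful filter.
print(post_score([(1500, +1), (1200, +1), (0, -1)] + [(5, -1)] * 4))
```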
I strong-downvoted this because I don't like online discussions that devolve into labeling things as cringe or based. I usually replace such words with low/high status, and EA already has enough of that noise.
I think I like this norm where people say how they voted and why (not always, but on the margin). I'm not 100% sure it would be better for more people to do this, but I like what you did here.
I think some statements or ideas here might be overly divisive or a little simplistic.
Ignoring emotional responses and optics in favour of pure open dialogue. Feels very New Atheist.
The long pieces of independent research that are extremely difficult to independently verify and which often defer to other pieces of difficult-to-verify independent research.
Heavy use of expected value calculations rather than emphasising the uncertainty and cluelessness around a lot of our numbers.
The more-karma-more-votes system that encourages an echo chamber.
For counterpoints: if you look at respected communities of, say, medical professionals (surgeons), or top athletes, military officers, and lawyers, they effectively do all of the things you criticize. All of these communities have complex systems of beliefs, use jargon, form in-groups, and have imperfect levels of intellectual honesty. Often, decisions and judgements are made opaquely by senior leaders, such that a formal "expected value calculation" would be transparent by comparison. Despite all this, these groups are respected, trusted, and often very effective in their domains.
Similarly, in EA, authority needs to be used in order to make progress. EA can't avoid internal authority, as its own history and that of other movements show. Instead, we have to wield it with intention: there need to be some senior people, who are correct and virtuous, who are trusted.
The problem is that LessWrong has, in the past, monopolized implicit positions of authority, using a particular style of discourse and rhetoric that masks what it is doing. While it does this, the fact that it is a distinct entity, actively seeking resources from EA, is mostly ignored.
Getting to the object level on LessWrong: what could be great is a true focus on virtue/"rationality"/self-improvement. In theory, LessWrong and rationality are fantastic, and there should be more of this. The problem is that without true ability, bad implementation and co-option occur. For example, one person, with narcissistic personality disorder, overbearingly dominated discourse on LessWrong and appropriated EA identity elsewhere. Only recently, when it has become egregiously clear that this person has negative value, has the community done much to counter him. This is both a moral mistake and a strong smell that something is wrong. The fact that this person and their (well-telegraphed) issues persisted casts doubt on LessWrong's intellectual aesthetics, as well as on its choice to steer numerous programs toward one cause (which it decided on long ago).
On the bigger question, "RE: fixing EA": some people are working on this, but the squalor and instability of the EA ecosystem are hampering the work and shielding the ecosystem from reform. For one example, someone I know was invested in an FTX FF worldview submission, until the contest was cancelled in November for almost the worst reason possible. In the aftermath, this person needs to go and handhold a long list of computer scientists / other leaders. That is, they need to defend the very AI-safety worldview damaged by this disaster, after their project was destroyed by the same disaster. While this is going on, there are many side considerations. For example, despite desperately wanting to communicate with and correct EA, they need to worry about any public communication being seized and used by "Zoe Cramer"-style critiques, which have been enormously empowered. These critiques are misplaced at best, but writing about why "democracy" and "voting" are bad looks ridiculous, given the issues in the paragraph above.
The solution space is small, and the people who think they can solve this are going to have visible actions/personas that will look different from other people's.
one person, with narcissistic personality disorder, overbearingly dominated discourse on LessWrong and appropriated EA identity elsewhere. Only recently, when it has become egregiously clear that this person has negative value, has the community done much to counter him.
!?
I like that you were open about your gut feeling and thinking that something is cringe. I generally don't think that's a good reason to do or not do things, but it might track important things, and you fleshed yours out.