I think some statements or ideas here might be overly divisive or a little simplistic, for example:

- Ignoring emotional responses and optics in favour of pure open dialogue (this feels very New Atheist).
- The long pieces of independent research that are extremely difficult to independently verify, and which often defer to other pieces of difficult-to-verify independent research.
- Heavy use of expected value calculations rather than emphasising the uncertainty and cluelessness around a lot of our numbers.
- The more-karma-more-votes system that encourages an echo chamber.
For counterpoints: if you look at respected communities of, say, medical professionals (surgeons), as well as top athletes, military officers, and lawyers, they effectively do all of the things you criticize. All of these communities have complex systems of belief, use jargon, form in-groups, and have imperfect levels of intellectual honesty. Often, decisions and judgements are made opaquely by senior leaders, such that a formal “expected value calculation” would look transparent by comparison. Despite all this, these groups are respected, trusted, and often very effective in their domains.
Similarly, in EA, authority needs to be used in order to make progress. EA can’t avoid internal authority, as its own history and that of other movements show. Instead, we have to use it with intention: there need to be some senior people, who are correct and virtuous, and who can be trusted.
The problem is that LessWrong has, in the past, monopolized implicit positions of authority, using a particular style of discourse and rhetoric that masks what it is doing. As it does this, the fact that it is a distinct entity, actively seeking resources from EA, is mostly ignored.
Getting to the object level on LessWrong: what could be great is a true focus on virtue/“rationality”/self-improvement. In theory, LessWrong and rationality are fantastic, and there should be more of this. The problem is that, without true ability, bad implementation and co-option occur. For example, one person, with narcissistic personality disorder, overbearingly dominated discourse on LessWrong and appropriated EA identity elsewhere. Only recently, when it became egregiously clear that this person has negative value, has the community done much to counter him. This is both a moral mistake and a strong smell that something is wrong. The fact that this person and their (well-telegraphed) issues persisted casts doubt on LessWrong’s intellectual aesthetics, as well as on its choice to steer numerous programs toward one cause (which it decided on long ago).
To the bigger question, “RE: fixing EA”: some people are working on this, but the squalor and instability of the EA ecosystem is hampering them and shielding the ecosystem from reform. For one example, someone I know was invested in an FTX FF worldview submission, until the contest was cancelled in November for almost the worst reason possible. In the aftermath, this person has to go and handhold a long list of computer scientists and other leaders; that is, they have to defend the very AI-safety worldview damaged by this disaster, after their own project was destroyed by the same disaster. While this is going on, there are many side considerations. For example, despite desperately wanting to communicate and correct EA, they have to worry about any public communication being seized on and used by “Zoe Cremer”-style critiques, which have been enormously empowered. These critiques are misplaced at best, but writing about why “democracy” and “voting” are bad looks ridiculous, given the issues in the paragraph above.
The solution space is small, and the people who think they can solve this are going to have visible actions and personas that look different from other people’s.