EA is just a few months out from a massive scandal caused in part by socially enforced artificial consensus (FTX), but judging by this post nothing has been learned and the "shut up and just be nice to everyone else on the team" culture is back again, even when truth gets sacrificed in the process. No one thinks HLI is stealing billions of dollars, of course, but the charge that they keep quasi-deliberately stacking the deck in StrongMinds' favour is far from outrageous and should be discussed honestly and straightforwardly.
JWS's quick take has often been in negative agreevote territory and is at +3 as of this writing. Meanwhile, the comments of the lead HLI critic suggesting potential bad faith have seen consistent patterns of high upvotes/agreevotes. I don't see much evidence of a "shut up and just be nice to everyone else on the team" culture here.
Hey Sol, some thoughts on this comment:
As Jason's response points out, I don't think the Forum's reaction to the HLI post has been "shut up and just be nice to everyone else on the team".
I don't think my Quick Take suggests that either! In fact, my first bullet point has a sceptical prior similar to the one you express in this comment.[1] I also literally say that "holding charity evaluators to account is important to both the EA mission and EA's identity", and point out that I don't want to sacrifice epistemic rigour. Indeed, one of my main points is that people, even those disagreeing with HLI, are shutting up too much! I think disagreement without explanation is bad, and I salute the thorough critics on that post who have made their reasoning for putting HLI in 'epistemic probation' clear.
I don't suggest 'sacrificing the truth'. My position is that it's hard to get a strong signal on StrongMinds' efficacy, and that HLI should therefore have been more modest earlier in its history, instead of framing StrongMinds as the most effective way to donate.
As for the question of whether HLI were "quasi-deliberately stacking the deck", I was quite open that I'm confused about where the truth lies, and that I find it difficult to adjudicate what the correct takeaway should be.
I don't think we really disagree that much, and I definitely agree that the HLI discussion should proceed transparently and that EA has a lot to learn from the last year, including FTX. If you re-read my Quick Take, I think you'll find I'm not taking the position you think I am.
That's my interpretation, of course; please correct me if I've misunderstood.