As others noted, the post also made a bunch of specific claims that others can disagree with as opposed to saying vague things or hedging a lot, which I also appreciate (see also epistemic legibility).
Thank you for acknowledging this and emphasizing the specific claims being made. I’m guessing you didn’t mean to cast aspersions through a euphemism. If that is part of what you meant here, I’d respect your choice not to be more explicit about it.
For my part, though, I think you’re understating how much of a problem those other posts are, so I feel obliged to emphasize how the vagueness and hedging in some of them has, wittingly or not, served to spread hazardous misinformation. To be specific, here’s an excerpt from another comment I made raising the same concern:
Others who’ve tried to get across the same point [Leopold is] making have, instead of explaining their disagreements, generally alleged that almost everyone else in the entire field of AI alignment is literally insane. [...] It counts as someone making a bold, senseless attempt to, arguably, dehumanize hundreds of their peers.
This isn’t just a negligible error from somebody recognized as part of a hyperbolic fringe in AI safety/alignment community. It’s direly counterproductive when it comes from leading rationalists, like Eliezer Yudkowsky and Oliver Habryka, who wield great influence in their own right, and are taken very seriously by hundreds of other people.