I really appreciate this mod comment. I was seriously racking my brain trying to think of how to put into words exactly what was wrong with this post, and nothing I came up with was anywhere near as concise or precise as this comment. It was done extremely well, something I can recognize after the fact but am nowhere near able to do myself.
I still wish it had come sooner. It’s also worth noting that this post made a really serious effort to optimize for maximizing damage to the reputation of at least one of the major Schelling points in the Rationality community (the LessWrong Sequences), which many people in EA also use for upskilling in preparation for high-EV projects. It’s not clear how many people read the arguments at the beginning and assumed they were true, rather than recognizing that the author had insulated themself from accountability. Rather than hosting bad-faith attacks on another community for more than 20 hours and then following up with heavy-handed actions (like moving the post to personal blog) that couldn’t be taken until ~20 hours in, it would make more sense to take lighter actions sooner, such as posting a high-quality disclaimer very early in the process (and, again, this comment nailed the situation very effectively). That would maximize the number of people who can make informed decisions about the content itself, rather than just going wherever it was designed to take them.
I’m not sure how feasible something like this would be, as I’m not a moderator and don’t know what the base rates are for moderator capability, or what other factors are involved; I know this situation was ultimately handled well, but it possibly could have been handled better.
Yudkowsky made a long reply, but it also contained this point:
As the entire post violates basic rules of epistemic conduct by opening with a series of not-yet-supported personal attacks, I will not be responding to the rest in detail. I’m sad about how anything containing such an egregious violation of basic epistemic conduct got this upvoted, and wonder about sockpuppet accounts or alternatively a downfall of EA.
Although forum cybersecurity is very important, since we don’t know what kinds of people will be the adversaries in the future, I don’t think this is as much of an indictment or a disappointment as Yudkowsky thinks. As a thought experiment, if EA were to grow by a factor of two each year, then at any given time a disproportionately large share of people in EA would have had their first exposure to EA ideas less than six months ago. That makes it all the more important to acknowledge that, at any given time, many newer users aren’t yet ready to handle the complex bad-faith arguments and mind games that an intelligent, disgruntled person will inevitably cook up here or there. This would happen even if EA were doing everything right and continuously getting better every day by expanding and bringing in many new people, which is already hard enough in the current ambiently hostile environment.
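For a rough sense of the numbers in that thought experiment, here is a minimal back-of-the-envelope sketch, assuming smooth exponential doubling rather than actual EA growth data:

```python
import math

# Assumption: the community doubles in size every year, growing smoothly.
doubling_time_years = 1.0
growth_rate = math.log(2) / doubling_time_years  # continuous growth rate

def share_newer_than(tenure_years: float) -> float:
    """Share of current members who joined within the last `tenure_years`.

    With N(t) = N0 * exp(r * t), that share is 1 - exp(-r * tenure_years).
    """
    return 1 - math.exp(-growth_rate * tenure_years)

print(f"First exposure within the last 6 months: {share_newer_than(0.5):.0%}")  # ~29%
print(f"First exposure within the last year:     {share_newer_than(1.0):.0%}")  # 50%
```

Under that assumption, roughly 29% of members at any given time would have less than six months of exposure, and fully half would have less than a year.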
I’d disagree with the notion that “this post made a really serious effort to optimize for maximizing damage to the reputation of at least one of the major Schelling points in the Rationality community.” The thing I was optimizing for was getting people to be more skeptical about Eliezer’s views, not ruining his career or reputation. In fact, as I said in the article, I think he often has interesting, clever, and unique insights and has made the world a better place.
See also my reply to Eliezer. In short, if you’re writing a post arguing for why we should trust someone less, I don’t know why you can’t start out with the broad claim and then give the reasons. Eliezer doesn’t defend that practice—he just asserts that it’s basic rationality.
Yeah, it seems pretty obvious to me that there are far worse things you could’ve said if you wanted to optimize for reputational damage, assuming above-75th-percentile creativity and/or ruthlessness.