Downvoted this because I think that in general, you should have a very high bar for telling people that they are overconfident, incompetent, narrow-minded, aggressive, contributing to a “very serious issue,” and lacking “any perspective at all.”
This kind of comment predictably chills discourse, and I think that discursive norms within AI safety are already a bit sketch: these issues are hard to understand, so the barrier to engaging at all is high, and the barrier to disagreeing with famous AI safety people is much, much higher. Telling people that their takes are incompetent (etc.) will likely lead to fewer bad takes, but, more importantly, it risks leading to an Emperor Has No Clothes phenomenon. Bad takes are easy to ignore, but echo chambers are hard to escape from.
This makes sense and it changed my mind: rudeness should stay on LessWrong, where Bayes Points rule the scene. Also, at the time I'm leaving this reply, the distribution of support on this page has shifted such that the ratio of opposition to the deal to uncertainty about the deal is less terrible; it was pretty bad when I wrote my original comment.
I still think that people are too harsh on Anthropic, and that has consequences. I was definitely concerned as well when I first found out about this; Amazon plays hardball, and is probably much more capable of doing cultural investigations and appearing harmless than Anthropic thinks. Nickliang's comment might have been far more carefully worded than I thought. But at the same time, if Dustin opposes the villainization of Anthropic and Yudkowsky is silent on the matter, that suggests mobbing Anthropic is the wrong move, and one with serious real-life consequences.