Upvoted. No need to apologize; your criticism was valid given how I presented my case, which didn't leave my main arguments particularly clear.
I still think the piece was a net benefit as written, and didn’t harm EA, or Remmelt himself, to any degree that we should be especially worried about.
Yeah, I haven't read the comments on Scott's follow-up post yet, since there weren't many when I first noticed it. I'm guessing there are more now, and that themes among the reactions may indicate the ways Scott's post has led to more or less accurate understandings of EA.
I expect its ultimate impact will be closer to neutral than significantly positive or negative. Any downside would probably amount to only a few people being put off of EA anyway. Public communication on this subject, 3+ meta levels in, might be intellectually interesting, but in practice it's too abstract to be high-stakes for EA.
Higher stakes, like how a controversial topic in AI safety/alignment is perceived among social or professional networks adjacent to EA, might be a risk more worth considering for a post like this. Scott himself has for years handled controversies in AI alignment like this one better than most in the community. I'm more concerned about those in the community who aren't as deft as Scott making the mistakes in public communication about EA that he is relatively competent at avoiding.
Many of the problems with doing this, at least in some cases, are self-inflicted by current EA norms I would rather challenge instead.
I don't yet have as strong a sense of what exactly the main causes of this are, but in general I get the same impression.
Interestingly, we might not disagree on very much after all. I probably did too much pattern-matching between your writing and broader impressions I've gotten of EA media strategies. Still, glad we got to chat it out!
Yeah, me too! Thanks for the conversation!