I’m a recruiter and ops generalist at GiveWell. Previously, I taught high school math at a charter school in Tennessee. I learned about EA in 2015 when I stumbled across Scott Alexander’s blog.
Any writing on this account is personal (not GiveWell opinion) unless clearly stated otherwise.
To extend your comment about lower standards for EA criticism, I thought the remainder of Venkatasubramanian’s quote was quite interesting:
The EA community has spilled heaps of words on every single one of these issues, but the article nevertheless portrays it as pushing frivolous, ill-considered ideas rather than supporting the Real, Serious concerns held by Thoughtful and Reasonable people.
It’s interesting to consider why the portrayal is so off-base, given that a few minutes of Googling and reading EA content could have disabused the reporter of the notion that EA takes an unserious, careless approach to long-term AI risk.
On the other hand, if you Google “effective altruism AI,” the first result is this Wired article with a very negative take on EA and AI. There are a few top-level results from 80K and EA.org, but most of the first-page results are articles that basically say, “So there’s this weird group of people who care a lot about AI...,” with varying, and mostly low, levels of sympathy.
I guess it could be that the reporter, the outlet, or both harbor a level of antipathy toward EA that precludes due diligence. Or they could be attempting basic due diligence but mainly reading sources with a very negative take on EA.
Either way, EA’s public image (specifically regarding AI) is not ideal. Your suggestion about making a greater effort to visibly signal cooperativeness might be a really good one!