I agree with much of your point that a lot of EA criticism has little effect because it seemingly doesn’t touch the underlying model being criticized. This is an underrated and quite important point in my opinion.
Something I find suboptimal about your post is that it's written as if the failure in communication here is entirely, or almost entirely, the fault of people criticizing EA. Almost all of the suggestions are about what critics of EA should do better, and the only suggestion about what any EA entity could do better is the vaguest one of them all. The dialogue gives no hint that any EA entity could do anything better. I find this framing both incorrect and unproductive.
Here’s an example in this direction. Your post suggests:
People critiquing EA should do more ideological turing tests. In particular, they should recognize that a sizable fraction of EA leadership is currently concerned that AI is “somewhat likely” to “extremely likely” to lead to the end of human civilization in the next 100 years (often <50 years).
And why is it that people don't understand this? I'll suggest it's not unrelated to things like the fact that the big EA longtermist book, written by the foremost EA public representative and widely promoted to the public, doesn't discuss how medium-term AI doom timelines are a prominent priority of EA longtermists, as you yourself gestured towards.