3) I’ve seen several places where you criticize fellow EAs for their lack of engagement or critical thinking. For example, three years ago, you wrote:
I also have criticisms about EAs being overconfident and acting as if they know way more than they do about a wide variety of things, but my criticisms are very different from [Holden’s criticisms]. For example, I’m super unimpressed that so many EAs didn’t know that GiveWell thinks that deworming has a relatively low probability of very high impact. I’m also unimpressed by how many people are incredibly confident that animals aren’t morally relevant despite knowing very little about the topic.
Do you think this has improved at all? And what are the current things that you are annoyed most EAs do not seem to know or engage with?
I no longer feel annoyed about this. I’m not quite sure why. Part of it is probably that I’m a lot more sympathetic when EAs don’t know things about AI safety than when they don’t know things about global poverty, because learning about AI safety seems much harder, and I think I hear relatively more discussion of AI safety now than three years ago.
One hypothesis is that 80000 Hours has made various EA ideas more accessible and well-known within the community, via their podcast and maybe their articles.