Regarding 1, I agree that it’s unlikely your post directly resulted in any large changes in action. However, I would be surprised if it didn’t have small effects on many people, including non-EAs or non-core EAs socially distant from you and other core members, and help them make better decisions. This looks more like many people making small updates than a few people taking big actions. To use the animal example, the effect is likely closer to many people becoming a bit warmer to the idea that animal welfare and factory farming matter than to a few people making big dietary changes. While this may sometimes lead to no practical effect (e.g. the uptick in sympathy for animal welfare dies down after a few months without leading to any dietary or other changes), the expected impact is positive.
Regarding 3, that’s not exactly what I meant. The post highlights big, persistent problems with EA reasoning and efforts that stem from structural factors. No single post can solve these problems, but I do think progress on them is possible over time. One way is by increasing common knowledge of the problem, and I think your post does a great job of that.