I agree that 1) is possible, but I don’t think it’s likely that many large actions were changed as a result, since I’d have heard of at least one. One thing that drives my thinking here is that EA is just a fairly small movement in absolute terms, and many/most decisions are made by a small subset of people. If I had optimized for a very public-facing medium (e.g. made a TikTok or internet meme convincing people to be vegetarian), I’d be less sure that information about its impact would’ve reached me. (But even then, it’d be hard to claim that e.g. >100 people made large dietary changes if I can’t even trace one.)
For 2), I agree improving discourse is important and influential. I guess I’m not sure what the sign is. If the post gets cited a bunch but none of the citations end up improving people’s quality of thinking or decisions, then this just multiplies the inefficiency. In comparison, I think my key numbers question post, while taking substantially less time from either me or my readers, likely changed the discourse in a positive way (making EA more quantitative). It’s substantially less splashy, but I think this is what intellectual/cultural progress looks like.
I also think the motivated reasoning post contributed to EA being overly meta, though this is probably a fair critique of a large number of my posts and/or activities in general.
For 3), if I understand your perspective correctly, a summary is that my post will foreseeably not have a large positive impact if it’s true (and presumably also not much of an impact if it’s false). I guess if a post foreseeably will not have effects commensurate with its opportunity costs, then this is more rather than less damning of my own judgement.
Regarding 1, I agree that it’s unlikely that your post directly resulted in any large action changes. However, I would be surprised if it didn’t have small effects on many people, including non-EAs or non-core EAs socially distant from you and other core members, and helped them make better decisions. This looks more like many people making small updates rather than a few people taking big actions. To use the animal example, the effect is likely closer to a lot of people becoming a bit warmer to the idea that animal welfare and factory farming matter, rather than a few people making big dietary changes. While sometimes this may lead to no practical effect (e.g. the uptick in sympathy for animal welfare dies down after a few months without leading to any dietary or other changes), in expectation the impact is positive.
Regarding 3, that’s not exactly what I meant. The post highlights big, persistent problems with EA reasoning and efforts due to structural factors. No single post can solve these problems. But I also think that progress on these issues is possible over time. One way is through increasing common knowledge of the problem—which I think your post does a great job of making progress on.