The biggest self-critique of the post is less conceptual than empirical: I think this post generated a lot of heat. But I’m not aware of useful results from the post (e.g. clearer thinking on important decisions, better actions, etc). So I think it wasn’t a particularly useful post overall, and likely not worth either my time writing it or readers’ time reading it.
Going forward, I will make my critiques more precise and action-oriented, with clearer recommendations for how individuals or organizations can change.
I think you might be overinterpreting the lack of legible useful results from the post.
1) People may have changed their actions privately because of your post without telling you about it. If they are sufficiently socially distant from you, you may never find out.
2) The post has been cited in 11 other forum posts, which indicates it made a mark on the discourse. These discourse effects can be hard to quantify or concretize, but in my view improving discourse is important and influential, and difficulties with legibility do not reduce impact, only the legibility of impact.
3) If the content of your post is correct—that motivated reasoning is pervasive in EA—why should you expect simply writing a post to that effect to cause big changes? Your post doesn’t solve the selection bias problem it mentions, or the lack of feedback loops and importance of social connections in EA, or incentives towards motivated reasoning.
I think trying to write all critiques such that they’re precise and action-oriented is a mistake, leaving much value on the table.
I agree that 1) is possible, but I don’t think it’s likely that many large actions were changed as a result, since I’d have heard of at least one. One thing that drives my thinking here is that EA is just a fairly small movement in absolute terms, and many/most decisions are made by a small subset of people. If I were optimizing for a very public-facing venue (e.g. making a TikTok or internet meme convincing people to be vegetarian), I’d be less sure that information about its impact would have reached me. (But even then it’d be hard to claim that e.g. >100 people made large dietary changes if I can’t even trace one.)
For 2), I agree improving discourse is important and influential. I guess I’m not sure what the sign is. If the post gets cited a bunch but none of the citations end up improving people’s quality of thinking or decisions, then this just multiplies the inefficiency. In comparison, I think my key numbers question post, while taking substantially less time from either myself or readers, likely changed the discourse in a positive way (making EA more quantitative). It’s substantially less splashy, but I think this is what intellectual/cultural progress looks like.
I also think the motivated reasoning post contributed to EA being overly meta, though I think this is probably a fair critique for a large number of my posts and/or activities in general.
For 3), if I understand your perspective correctly, a summary is that my post will foreseeably not have a large positive impact if it’s true (and presumably also not much of an impact if it’s false). I guess if a post foreseeably will not have effects commensurate with its opportunity costs, then this is more rather than less damning of my own judgement.
Regarding 1, I agree that it’s unlikely that your post directly resulted in any large action changes. However, I would be surprised if it didn’t have small effects on many people, including non-EAs or non-core EAs socially distant from you and other core members, and helped them make better decisions. This looks more like many people making small updates than a few people taking big actions. To use the animal example, the effect is likely closer to many people becoming a bit warmer to the idea that animal welfare and factory farming matter than to a few people making big dietary changes. While sometimes this may lead to no practical effect (e.g. the uptick in sympathy for animal welfare dies down after a few months without leading to any dietary or other changes), in expectation the impact is positive.
Regarding 3, that’s not exactly what I meant. The post highlights big, persistent problems with EA reasoning and efforts due to structural factors. No single post can solve these problems. But I also think that progress on these issues is possible over time. One way is through increasing common knowledge of the problem—which I think your post does a great job of making progress on.