While I agree that the optimizer’s curse is a problem, and one that is relevant to certain sectors of EA, the very high variance in expected impact between causes makes it much less serious than other problems in EA epistemics, which is why it hasn’t received much attention.
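To make the variance point concrete, here’s a minimal Monte Carlo sketch (plain NumPy; the parameter values and function name are illustrative, not from your post): when the true spread between causes dwarfs the estimation noise, selecting the apparently-best cause barely inflates its estimate, so the curse re-ranks very little.

```python
import numpy as np

def selection_bias(sigma_true, sigma_noise, n_causes=10, n_trials=100_000, seed=0):
    """Mean overestimate (noisy estimate minus true value) of the top-ranked cause."""
    rng = np.random.default_rng(seed)
    true = rng.normal(0.0, sigma_true, size=(n_trials, n_causes))         # true impacts
    est = true + rng.normal(0.0, sigma_noise, size=(n_trials, n_causes))  # noisy estimates
    rows = np.arange(n_trials)
    best = est.argmax(axis=1)  # pick the cause that *looks* best
    return (est[rows, best] - true[rows, best]).mean()

# Noise dominates the spread between causes: the winner's estimate is badly inflated.
print(selection_bias(sigma_true=0.1, sigma_noise=1.0))   # roughly +1.5 (about the max of 10 N(0,1) draws)
# Spread between causes dominates the noise: selection barely inflates the estimate.
print(selection_bias(sigma_true=10.0, sigma_noise=1.0))  # small bias relative to a spread of 10
```

The gap between the two printed numbers is the whole point: the curse’s bite scales with how much of the apparent ranking is driven by noise rather than real differences.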
That said, you do note some very interesting things about the optimizer’s curse, so the post is valuable beyond restating the problem. Credit where it’s due: it’s a nice incremental improvement.
An underrated success story, and one I find more plausible than moral circle expansion, is that human concern for animal welfare (both wild and farmed) remains low, but through a combination of causal moral trade and acausal trade like Evidential Cooperation in Large Worlds, animals/animal advocates across the multiverse get most of what they want, because it’s cheap and easy to coordinate.
In general, I notice that trade-based futures rarely get depicted, even though I tend to think they are by far the most likely way for most beings to get most of what they want, almost regardless of their values, provided we can somehow prevent threats/blackmail from eating into the expected value.