This concern seems to bubble up from time to time in EA, but, as you say, is rarely addressed head-on. I wonder how often we privately believe the cause we're championing can't possibly be as good as our naive expected value estimate, yet is still probably "very good".
Great post. I explored the same idea in rougher form during draft amnesty week last year: https://forum.effectivealtruism.org/posts/nQYW5nq9iKpCKnYBj/question-how-to-form-beliefs-about-effectiveness-in-high
We came up with basically the same toy model!