Just saw this comment; I’m also super late to the party responding to you!
> It actually seems to me it might have been worth emphasising more, as I think a casual reader could think this post was a critique of formal/explicit/quantitative models in particular.
Totally agree! Honestly, I had several goals with this post, and I almost completely failed on two of them:
1. Arguing why utilitarianism can’t be the foundation of ethics.
2. Without talking much about AI, explaining why I don’t think people in the EA community are being reasonable when they suggest there’s a decent chance of an AGI being developed in the near future.
Instead, I think this post came off primarily as a criticism of certain kinds of models and of GiveWell’s approach to prioritization (which is unfortunate, since I think the Optimizer’s Curse isn’t as big an issue for GiveWell & global health as it is for many other EA orgs/cause areas).
--
On the second piece of your comment, I think we mostly agree. Informal/cluster-style thinking is probably helpful, but it definitely doesn’t make the Optimizer’s Curse a non-issue.
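(For any reader landing here without the context of the original post: here’s a minimal simulation sketch, mine rather than anything from the post, of why the Curse doesn’t go away just because each individual estimate is honest. The parameters below are purely illustrative.)

```python
import numpy as np

# Optimizer's Curse sketch: every option has the same true value, but we
# select whichever one has the highest *noisy* estimate, so the estimate
# of the chosen option is systematically biased upward.
rng = np.random.default_rng(0)

n_options, n_trials = 20, 10_000
true_value = 1.0   # every option is actually equally good (illustrative)
noise_sd = 0.5     # spread of the error in our estimates (illustrative)

selected_estimates = []
for _ in range(n_trials):
    estimates = true_value + rng.normal(0.0, noise_sd, n_options)
    selected_estimates.append(estimates.max())  # we "fund" the apparent best

print(f"True value of every option:          {true_value:.2f}")
print(f"Mean estimate of the chosen option:  {np.mean(selected_estimates):.2f}")
# The chosen option looks noticeably better than it really is, even though
# no single estimate is biased on its own -- the bias comes from selecting
# on the noise.
```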
I largely agree with what you said in this comment, though I’d say the line between data collection and data processing is often blurred in real-world scenarios.
I think we are talking past each other (not in a bad-faith way, though!), so I want to stop myself from digging us deeper into an unproductive rabbit hole.