Late to the party, but I was re-reading this as it relates to another post I'm working on, and I realised I have a question. (Note that I say "you" a lot in this comment, but I'd also be interested in anyone else's thoughts on my questions.) You write:
The optimizer's curse can show up even in situations where effective altruists' prioritization decisions don't involve formal models or explicit estimates of expected value. Someone informally assessing philanthropic opportunities in a linear manner might have a thought like:
Thing X seems like an awfully big issue. Funding Group A would probably cost only a little bit of money and have a small chance of leading to a solution for Thing X. Accordingly, I feel decent about the expected cost-effectiveness of funding Group A.
Let me compare that to how I feel about some other funding opportunities…
Although the thinking is informal, there's uncertainty, potential for bias, and an optimization-like process.
That makes sense to me, and seems a very worthwhile point. (It actually seems to me it might have been worth emphasising more, as I think a casual reader could think this post was a critique of formal/explicit/quantitative models in particular.)
But then in a footnote, you add:
Informal thinking isn't always this linear. If the informal thinking considers an opportunity from multiple perspectives, draws on intuitions, etc., the risk of postdecision surprise may be reduced.
I'm not sure I understand what you mean by that, or if it's true/makes sense. It seems to me that, ultimately, if we're engaging in a process that effectively provides a ranking of how good the options seem (whether based on cost-effectiveness estimates or just how we "feel" about them), and there's uncertainty involved, and we pick the option that seems to come out on top, the optimizer's curse will be relevant. Even if we use multiple separate informal ways of looking at the problem, we still ultimately end up with a top-ranked option, and, given that that option's ended up on top, we should still expect that errors have inflated its apparent value (whether that's in numerical terms or in terms of how we feel) more than average. Right?
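To make that concrete (this is just a toy sketch of my own, not anything from your post): suppose every option has the same true value, our assessments of them are unbiased but noisy, and we simply pick whichever assessment comes out highest.

```python
import random

def average_inflation(n_options=10, noise_sd=1.0, trials=10_000):
    """How much the apparently-best option's estimate exceeds its true value."""
    total = 0.0
    for _ in range(trials):
        # All options have the same true value (0 here); each assessment is
        # unbiased but carries independent noise.
        estimates = [random.gauss(0, noise_sd) for _ in range(n_options)]
        # Since every true value is 0, the winning estimate is exactly the
        # amount by which selection has inflated the chosen option's value.
        total += max(estimates)
    return total / trials

if __name__ == "__main__":
    print(f"Average inflation of the chosen option: {average_inflation():.2f}")
    # With 10 options and unit noise this comes out around +1.5, even though
    # every individual estimate is unbiased.
```

Nothing in that sketch cares whether the "estimates" are numbers in a spreadsheet or just how good each option feels to us; all that matters is that we rank under uncertainty and take the top.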
Or did you simply mean that using multiple perspectives means that the various different errors and uncertainties might be more likely to balance out (in the same sort of way that converging lines of evidence based on different methodologies make us more confident that we've really found something real), and that, given that there'd effectively be less uncertainty, the significance of the optimizer's curse would be smaller? (This seems to fit with "the risk of postdecision surprise may be reduced".)
If that's what you meant, that seems reasonable to me, but it seems that we could get the same sort of benefits just by doing something like gathering more data or improving our formal models. (Though of course that may often be more expensive and difficult than cluster thinking, so highlighting that we also have the option of cluster thinking does seem useful.)
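For what it's worth, the same toy setup can illustrate both readings (again, a hypothetical sketch with made-up numbers, not anything from the post): averaging several independent noisy assessments of each option, whether those come from extra perspectives or extra data, shrinks the effective noise, and with it the inflation of whatever ends up ranked first.

```python
import random

def inflation_of_winner(n_options=10, noise_sd=1.0, n_assessments=1, trials=10_000):
    total = 0.0
    for _ in range(trials):
        estimates = []
        for _ in range(n_options):
            # Each option (true value 0) gets several independent noisy
            # assessments; their average has standard deviation noise_sd/sqrt(k).
            assessments = [random.gauss(0, noise_sd) for _ in range(n_assessments)]
            estimates.append(sum(assessments) / n_assessments)
        # With all true values at 0, the top estimate is pure selection inflation.
        total += max(estimates)
    return total / trials

if __name__ == "__main__":
    for k in (1, 4, 16):
        print(f"{k} assessment(s) per option: "
              f"inflation of the winner is about {inflation_of_winner(n_assessments=k):.2f}")
```

The inflation falls roughly as 1/sqrt(k), which seems like the "effectively less uncertainty" story; it never reaches zero, which I take to be the sense in which the curse stays relevant.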
Just saw this comment; I'm also super late to the party responding to you!
It actually seems to me it might have been worth emphasising more, as I think a casual reader could think this post was a critique of formal/explicit/quantitative models in particular.
Totally agree! Honestly, I had several goals with this post, and I almost completely failed on two of them:
1. Arguing why utilitarianism can't be the foundation of ethics.
2. Without talking much about AI, explaining why I don't think people in the EA community are being reasonable when they suggest there's a decent chance of an AGI being developed in the near future.
Instead, I think this post came off as primarily a criticism of certain kinds of models and a criticism of GiveWell's approach to prioritization (which is unfortunate since I think the Optimizer's Curse isn't as big an issue for GiveWell & global health as it is for many other EA orgs/cause areas).
--

On the second piece of your comment, I think we mostly agree. Informal/cluster-style thinking is probably helpful, but it definitely doesn't make the Optimizer's Curse a non-issue.