it seems like the crux is often the question of how easy it is to choose good priors
Before anything like a crux can be identified, complainants need to identify what a “good prior” even means, or which strategies are better than others. Until then, they’re not even wrong—it’s not even possible to say what disagreement exists. To talk airily about “good priors” or “bad priors” as being “easy” or “hard” to identify is just empty phrasing, and suggests confusion about rationality and probability.
Hey Kyle, I’d stopped responding since I felt like we were well beyond the point where we were likely to convince one another or say things that those reading the comments would find insightful.
I understand why you think “good prior” needs to be defined better.
As I try to communicate (but may not quite say explicitly) in my post, I think that in situations where uncertainty is poorly understood, it’s hard to come up with priors that are good enough that choosing actions based on explicit Bayesian calculations will lead to better outcomes than choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking.
As a real world example:

Venture capitalists frequently fund things that they’re extremely uncertain about. It’s my impression that Bayesian calculations rarely play into these situations. Instead, smart VCs think hard and critically and come to conclusions based on processes that they probably don’t fully understand themselves.
It could be that VCs have just failed to realize the amazingness of Bayesianism. However, given that they’re smart & there’s a ton of money on the table, I think the much more plausible explanation is that hardcore Bayesianism wouldn’t lead to better results than whatever it is that successful VCs actually do.
Again, none of this is to say that Bayesianism is fundamentally broken or that high-level Bayesian-ish things like “I have a very skeptical prior so I should not take this estimate of impact at face value” are crazy.
Venture capitalists frequently fund things that they’re extremely uncertain about. It’s my impression that Bayesian calculations rarely play into these situations. Instead, smart VCs think hard and critically and come to conclusions based on processes that they probably don’t fully understand themselves.
I interned for a VC, albeit a small and unknown one. Sure, they don’t do Bayesian calculations, if you want to be really precise. But they make extensive use of quantitative estimates all the same. If anything, they are cruder than what EAs do. As far as I know, they don’t bother correcting for the optimizer’s curse! I never heard it mentioned. VCs don’t primarily rely on the quantitative models, but other areas of finance do. If what they do is OK, then what EAs do is better. This is consistent with what finance professionals told me about the financial modeling that I did.
Plus, this is not about the optimizer’s curse. Imagine that you told those VCs that they were no longer choosing which startups are best; instead, they now have to sort startups into better-than-average and worse-than-average. The optimizer’s curse would no longer interfere. Yet they’re not going to start relying more on explicit Bayesian calculations. They’re going to use the same way of thinking as always.
And explicit Bayesian calculation is rarely used by anyone anywhere. Humans encounter many problems which are not about optimizing, and they still don’t use explicit Bayesian calculation. So clearly the optimizer’s curse is not the issue. Instead, it’s a matter of which kinds of cognition and calculation people are more or less comfortable with.
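To make the optimizer’s curse concrete (this is a toy simulation I’m adding for illustration, not anything from the original discussion, and all the numbers are invented): if you pick the option with the highest noisy estimate, that estimate systematically overstates the option’s true value, while a Bayesian posterior mean that shrinks estimates toward the prior removes the bias.

```python
import random

random.seed(0)

def simulate(n_options=20, prior_mean=0.0, prior_sd=1.0, noise_sd=1.0, trials=5000):
    """Compare the selected option's naive estimate and its shrinkage-corrected
    estimate against its true value, averaged over many trials."""
    naive_bias = 0.0
    corrected_bias = 0.0
    # With a normal prior and normal noise, the posterior mean shrinks the
    # estimate toward the prior mean by a fixed factor.
    shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    for _ in range(trials):
        true_vals = [random.gauss(prior_mean, prior_sd) for _ in range(n_options)]
        estimates = [t + random.gauss(0.0, noise_sd) for t in true_vals]
        best = max(range(n_options), key=lambda i: estimates[i])
        naive_bias += estimates[best] - true_vals[best]
        posterior = prior_mean + shrink * (estimates[best] - prior_mean)
        corrected_bias += posterior - true_vals[best]
    return naive_bias / trials, corrected_bias / trials

naive, corrected = simulate()
print(f"naive bias of the winner: {naive:.2f}")
print(f"shrinkage-corrected bias: {corrected:.2f}")
```

The naive bias comes out well above zero (the “winner” looked better than it was), while the corrected bias sits near zero, which is the sense in which a skeptical prior “corrects for” the curse.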
it’s hard to come up with priors that are good enough that choosing actions based on explicit Bayesian calculations will lead to better outcomes than choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking.
Explicit Bayesian calculation is a way of choosing actions based on a combination of careful skepticism, information gathering, hunches, and critical thinking. (With math too.)
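For what an explicit Bayesian calculation looks like in this setting, here is a minimal sketch I’m adding for illustration (every number is made up, and the normal-prior/normal-noise setup is one convenient assumption among many): a skeptical prior combined with an impressive but noisy impact estimate.

```python
# Skeptical prior on some impact measure, plus a noisy outside estimate.
# All values are hypothetical; the point is that "skeptical prior" becomes
# one line of arithmetic under normal assumptions.
prior_mean = 2.0      # what we'd expect of a typical intervention
prior_sd = 1.0
estimate = 10.0       # an outside estimate that looks too good to be true
estimate_sd = 4.0     # how noisy we judge that estimate to be

# Posterior mean is a precision-weighted average of prior and estimate.
w = (1 / estimate_sd**2) / (1 / prior_sd**2 + 1 / estimate_sd**2)
posterior_mean = (1 - w) * prior_mean + w * estimate

print(f"posterior mean: {posterior_mean:.2f}")
```

The posterior lands far closer to the skeptical prior than to the raw estimate, which is the quantitative version of “I should not take this estimate of impact at face value.”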
I’m guessing you mean we should use intuition for the final selection, instead of quantitative estimates. OK, but I don’t see how the original post is supposed to back it up; I don’t see what the optimizer’s curse has to do with it.