I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.
I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.
(People who hold this view might not find the usual Dutch book or representation theorem arguments compelling.)
I’ll second this. In double-cruxing EV calcs with others, it is clear that they are often quite parameter-sensitive, and that awareness of such parameter sensitivity is rare and does not come for free. Just the opposite: trying to do sensitivity analysis on what are already fuzzy qualitative-to-quantitative heuristics is quite stressful and frustrating. Results from sufficiently complex EV calcs usually fall prey to ontology failures, i.e. wrong key assumptions: in studies of analyst performance in the intelligence community, key assumptions turned out to be wrong about 25% of the time, and most scenarios rest on more than four key assumptions.
I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a model, as is arguably often the case with EAs trying to optimize complex systems or the long-run future, then it is reasonable to ask why they should carry much epistemic / decision-theoretic weight.
But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer’s curse. It doesn’t imply that taking the expected value is not the right solution to the idea of cluelessness.
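The optimizer’s curse mentioned here is easy to see in a quick simulation. This is a hypothetical sketch (none of the numbers come from the thread): ten options all have the same true value, our EV estimate of each carries independent noise, and we always act on the option with the highest estimated EV. The winner’s estimate is then systematically biased upward.

```python
import random

random.seed(0)

# Hypothetical setup: 10 interventions, all with true value 1.0, but each
# EV estimate is corrupted by Gaussian noise. Selecting on the highest
# *estimated* EV overstates the true value of the chosen option.
true_value = 1.0
n_options, n_trials, noise_sd = 10, 10_000, 0.5

selected_estimates = []
for _ in range(n_trials):
    estimates = [true_value + random.gauss(0, noise_sd) for _ in range(n_options)]
    selected_estimates.append(max(estimates))  # pick the best-looking option

avg_selected = sum(selected_estimates) / n_trials
print(f"true value of every option:     {true_value:.2f}")
print(f"avg estimated EV of the winner: {avg_selected:.2f}")  # noticeably above 1.0
```

Nothing here is incompatible with Bayesianism; the bias disappears if the estimates are properly shrunk toward a prior before selection, which is exactly the standard remedy for the curse.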
But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer’s curse.
I’m not sure what you mean. There is nothing being estimated and no concept of robustness when it comes to the notion of subjective probability in question.
The expected value of your actions is being estimated. Those estimates are based on subjective probabilities and can be well or poorly supported by evidence.
For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence, unless you just mean that they result from calculating the Bayesian update correctly or incorrectly.
Likewise there is no true expected utility to estimate. It is a measure of an epistemic state, not a feature of the external world.
I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.
For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence
Yes, whether you are Bayesian or not, it means that the estimate is robust to unknown information.
I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory.
No, subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models. I don’t see why you would think otherwise.
As does what you have said about robustness and being well or poorly supported by evidence.
No, everything that has been written on the optimizer’s curse is perfectly compatible with subjective expected utility theory.
whether you are Bayesian or not, it means that the estimate is robust to unknown information
I’m having difficulty understanding what it means for a subjective probability to be robust to unknown information. Could you clarify?
subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models.
Could you give an example where two Bayesians have the same subjective probabilities, but SEUT tells us that one subjective probability is better than the other due to better robustness / resulting from a better model / etc.?
It means that your credence will change little (or a lot) depending on information which you don’t have.
For instance, if I know nothing about Pepsi then I may have a 50% credence that their stock is going to beat the market next month. However, if I talk to a company insider who tells me why their company is better than the market thinks, I may update to 55% credence.
On the other hand, suppose I don’t talk to that guy, but I did spend the last week talking to lots of people in the company and analyzing a lot of hidden information about them which is not available to the market. And I have found no overall reason to expect them to beat the market or not: the good information balances out the bad. So I again have a 50% credence. However, if I talk to that one guy who tells me why the company is great, I won’t update to 55% credence; I’ll update to 51%, or not at all.
Both people here are being perfect Bayesians. Before talking to the one guy, they both have 50% credence. But the latter person has more reason to be surprised if Pepsi diverges from the mean expectation.
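The Pepsi scenario above can be sketched with standard Beta-Bernoulli updating. This is a hypothetical illustration, not anything from the thread: both agents assign credence 0.5 that the stock beats the market, but one 0.5 rests on ignorance (a Beta(1, 1) prior) and the other on a week of balanced evidence (a Beta(50, 50) prior), so the same insider report moves them very differently.

```python
# Sketch of "resilient" vs "non-resilient" credences via Beta-Bernoulli updating.
# All parameter choices are illustrative assumptions.

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) credence distribution."""
    return a / (a + b)

naive = (1, 1)       # knows nothing about the company
informed = (50, 50)  # a week of digging, good news balancing bad

assert beta_mean(*naive) == beta_mean(*informed) == 0.5  # identical credences today

# Both now hear one insider's positive report (modelled as a single "success").
naive_post = (naive[0] + 1, naive[1])
informed_post = (informed[0] + 1, informed[1])

print(f"naive agent updates to:    {beta_mean(*naive_post):.3f}")    # large jump
print(f"informed agent updates to: {beta_mean(*informed_post):.3f}") # barely moves
```

Both agents update by the book; what differs is how much hidden evidence their shared 50% already encodes, which is one way to make "robust to unknown information" precise.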
It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.
My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.
It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.
Well in this case at least, it is apparent that the differences are caused by how well or poorly supported people’s beliefs are. It doesn’t say anything about variance in general.
My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.
Distribution P is your credence. So you are saying “I am worried that my credences don’t have to do with my credence.” That doesn’t make sense. And sure we’re uncertain of whether our beliefs are accurate, but I don’t see what the problem with that is.
Distribution P is your credence. So you are saying “I am worried that my credences don’t have to do with my credence.” That doesn’t make sense. And sure we’re uncertain of whether our beliefs are accurate, but I don’t see what the problem with that is.
I’m having difficulty parsing the statement you’ve attributed to me, or mapping it to what I’ve said. In any case, I think many people share the intuition that “frequentist” properties of one’s credences matter. People care about calibration training and Brier scores, for instance. It’s not immediately clear to me why it’s nonsensical to say “P is my credence, but should I trust it?”
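The Brier score mentioned above is simple to state concretely. In this sketch the forecasts and outcomes are made up for illustration: two internally coherent sets of credences are scored against what actually happened, and one scores much better, which is the sense in which credences can be externally evaluated.

```python
# Minimal Brier-score sketch: mean squared error between stated probabilities
# and realized 0/1 outcomes (lower is better). All numbers are invented.

def brier_score(forecasts, outcomes):
    """Average of (p - outcome)^2 over the forecast series."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]                  # what actually happened
well_calibrated = [0.9, 0.1, 0.8, 0.7, 0.2]
overconfident = [0.99, 0.6, 0.3, 0.2, 0.9]

print(f"well-calibrated forecaster: {brier_score(well_calibrated, outcomes):.3f}")
print(f"overconfident forecaster:   {brier_score(overconfident, outcomes):.3f}")
```

Both forecasters could be perfectly coherent Bayesians; the score tracks how well their credences latched onto reality, not whether their updates were computed correctly.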
Good points, but this seems to point to a weakness in the way we do modeling, not a weakness in expected value.