Thanks for this! I wonder how common or rare the third [edit: oops, meant “fourth”] type of graph is. I have an intuition that there’s something weird or off about having beliefs that act that way (or thinking you do), but I’m having trouble formalizing why. Some attempts:
If you think you’re at (say) the upper half of a persistent range of volatility, that means you expect to update downward as you learn more. So you should just make the update proactively, bringing your confidence toward medium (and narrowing volatility around medium confidence).
Special case (?): if you’re reading or hearing a debate and your opinion keeps wildly swinging back and forth, at some point you should probably think, “well, I guess I’m bad at evaluating these arguments; probably I should stop strongly updating based on whether I find them compelling.”
For many estimators, variance decreases as you get more independent samples (quick numerical sketch below).
At the (unrealistic) limit of deliberation, you’ve seen and considered everything, and then there’s no more room for volatility.
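A quick numerical illustration of the variance point, just the textbook sample-mean case, so treat it as a sketch rather than anything specific to the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# The variance of the sample mean shrinks roughly like sigma^2 / n
# as independent samples accumulate.
for n in [10, 100, 1000]:
    means = rng.normal(loc=0.0, scale=1.0, size=(10_000, n)).mean(axis=1)
    print(n, round(float(means.var()), 4))  # ~0.1, ~0.01, ~0.001
```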
Succinctly, beliefs should behave like a martingale, and the third and fourth graphs are probably not a martingale. It’s possible to update based on your expected evidence and still get graphs like in 3 or 4, but this means you’re in an actually unlikely world.
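To make the martingale point concrete, here is a minimal sketch with made-up numbers: a two-hypothesis Bayesian update where the credence you expect to hold after the next observation equals the credence you hold now.

```python
# Two hypotheses about a coin: biased towards heads (p = 0.8) vs. fair (p = 0.5).
p_h = 0.3  # current credence in the biased hypothesis (arbitrary starting point)

# Predictive probability of heads on the next flip, marginalizing over hypotheses.
p_heads = p_h * 0.8 + (1 - p_h) * 0.5

# Posterior credence after seeing heads vs. tails (Bayes' rule).
post_heads = p_h * 0.8 / p_heads
post_tails = p_h * 0.2 / (1 - p_heads)

# Martingale property: the expected next-step credence equals the current credence,
# so there is no predictable drift left to update on in advance.
expected_next = p_heads * post_heads + (1 - p_heads) * post_tails
print(expected_next)  # ~0.3, same as p_h
```

A trace that keeps mean-reverting inside a fixed band has predictable drift of exactly this kind, which is the sense in which graphs 3 and 4 look suspicious.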
That said, I think it’s good to keep track of emotional updates as well as logical Bayesian ones, and those can behave however.
Thanks. With the benefit of hindsight, the blue envelopes probably should have been dropped from the graph, leaving the trace alone:
As you and Kwa note, having a ‘static’ envelope you are bumbling between looks like a violation of the martingale property—the envelope should be tracking the current value more (but I was too lazy to draw that).
I agree that, all else equal, you should expect resilience to increase with more deliberation; as you say, you are moving towards the limit of perfect knowledge with more work. Perhaps graphs 3 and 4 [I’ve added numbers to make referring easier] could signal that you’re only moving from 10.1% to 10.2% along this hypothetical range from ignorance to omniscience.
Related to Kwa’s point, another benefit of tracking one’s beliefs is not only figuring out when to terminate deliberation, but also ‘keeping score’ of how rational one’s beliefs appear to be. Continued volatility (in G3, but also G4) could mean you are (rationally) in a situation where your weak prior is getting buffeted by a lot of strong evidence; but it could also mean you are under-damped and over-updating.
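One toy way to picture the under-damped case (a sketch only; ‘over-updating’ is modelled here, by assumption, as counting every observation twice): on the same alternating stream of evidence, the over-updater’s credence swings noticeably further each step than the calibrated updater’s.

```python
import numpy as np

# Hypotheses: coin biased towards heads (p = 0.7) vs. towards tails (p = 0.3).
# Both updaters see the same alternating evidence stream: H, T, H, T, ...
obs = ["H", "T"] * 5

def run(weight):
    """weight = 1 is a calibrated Bayesian; weight = 2 double-counts each observation."""
    log_odds, trace = 0.0, []
    for o in obs:
        llr = np.log(0.7 / 0.3) if o == "H" else np.log(0.3 / 0.7)
        log_odds += weight * llr
        trace.append(1 / (1 + np.exp(-log_odds)))
    return trace

calibrated = run(1.0)    # oscillates between 0.7 and 0.5
over_updater = run(2.0)  # oscillates between ~0.84 and 0.5 on the same evidence
print([round(float(x), 2) for x in calibrated])
print([round(float(x), 2) for x in over_updater])
```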
This seems sort of obvious so maybe I’m missing something?
Imagine there are two types of bins. One bin only has red balls. The other bin has both red and yellow balls in equal proportion.
You have one bin and you don’t know which one it is. You pick up balls from it successively, and you are forming an estimate of the color of the next ball you pick up.
Imagine picking up 5 balls in a row that are all red. You logically believe that the next ball will be red with more than 50% probability.
Then the 6th ball is yellow, and you’re back to 50%.
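A rough sketch of how the numbers work out in this bin example, assuming draws with replacement so the mixed bin always gives red with probability 1/2:

```python
from fractions import Fraction

# Two equally likely bins: 'pure' holds only red balls, 'mixed' is half red, half yellow.
p_pure = Fraction(1, 2)            # credence that we hold the all-red bin
draws = ["red"] * 5 + ["yellow"]

for ball in draws:
    like_pure = Fraction(1) if ball == "red" else Fraction(0)
    like_mixed = Fraction(1, 2)    # the mixed bin gives either color with equal chance
    p_pure = p_pure * like_pure / (p_pure * like_pure + (1 - p_pure) * like_mixed)
    p_next_red = p_pure + (1 - p_pure) * Fraction(1, 2)
    print(ball, "P(all-red bin) =", p_pure, " P(next red) =", p_next_red)

# After five reds, P(next red) has climbed to 65/66; the single yellow ball
# identifies the mixed bin and sends it straight back to 1/2.
```

The big swing back down to 1/2 happens on the branch that had probability 1/66 at that point.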
I think the bin analogy seems dynamic, but it also applies to assessing reports, where you’re trying to figure out a latent state and there isn’t a time element.
There are many situations in the world that seem to be like this, but it feels ideological or sophomoric to say it?
Fukuyama’s ‘end of history’, where confidence in the dominance and stability of democracy seems misplaced
The Qing dynasty’s belief in its material superiority over outsiders
Reading Charles He’s forum comment history and deciding if he’s reasonable
(I’m understanding your comment as providing an example of a situation where volatility goes to 0 with additional evidence.)
I agree it’s clear that this happens in some situations—it’s less immediately obvious to me whether this happens in every possible situation.
(Feel free to let me know if I misread. I’m also not sure what you mean by “like this.”)
I think “volatility” (being able to predict yellow or red ball) is going higher?
But I feel like there is a real chance I’m talking past you and maybe wrong?
For example, you might be talking about forming beliefs about volatility. In my example, beliefs about volatility are now more stable over time after seeing the yellow ball (even if volatility itself rises), since you now know which bin you’re drawing from.
I guess I’m just repeating my example, where searches or explorations reveal something like “a new latent state”, so that previous information that was being used to form beliefs is no longer relevant.
It’s true this statement doesn’t have much evidence behind it (but that’s partially because I’m now somewhat confused about what exactly the example is talking about).
Ok, I didn’t understand the OP’s examples or what he was saying (so I sort of missed the point of his post). I think he’s saying that in the fourth example the range of reasonable beliefs could increase over time as you collect more information.
This seems unlikely and unnatural, so I think you’re right. I retract my comment.
Ah sorry, I meant to use “volatility” to refer to something like “expected variance in one’s estimate of their future beliefs,” which is maybe what you refer to as “beliefs about volatility.”