Thanks for this post! As someone who’s agonized over some career (and other) decisions, I really appreciate it. It also seems to apply to e.g. shallow investigations into potential problems/causes (e.g., topic). Also, I love the graphs.
A few relevant posts and thoughts:
The author’s other post on the topic is very relevant: Use resilience, instead of imprecision, to communicate uncertainty
For some of the advice in this post (like on flakiness, parallel trial and evaluation, etc.), I’d also point people to: Should you reverse any advice you hear?
For people who haven’t used a Google Doc to ask people for comments on a big decision: An easy win for hard decisions.
Thanks for this! I wonder how common or rare the third [edit: oops, meant “fourth”] type of graph is. I have an intuition that there’s something weird or off about having beliefs that act that way (or thinking you do), but I’m having trouble formalizing why. Some attempts:
If you think you’re at (say) the upper half of a persistent range of volatility, that means you expect to update downward as you learn more. So you should just make the update proactively, bringing your confidence toward medium (and narrowing volatility around medium confidence).
Special case (?): if you’re reading or hearing a debate and your opinion keeps wildly swinging back and forth, at some point you should probably think, “well, I guess I’m bad at evaluating these arguments; probably I should stop strongly updating based on whether I find them compelling.”
For many estimators, variance decreases as you get more independent samples.
At the (unrealistic) limit of deliberation, you’ve seen and considered everything, and then there’s no more room for volatility.
Succinctly, beliefs should behave like a martingale, and the third and fourth graphs are probably not a martingale. It’s possible to update based on your expected evidence and still get graphs like in 3 or 4, but this means you’re in an actually unlikely world.
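To make the martingale point concrete, here is a minimal sketch (my own illustration, not from the thread; the hypothesis and signal probabilities are made up) showing that a Bayesian reasoner’s current credence equals the expectation of their next credence under their own predictive distribution:

```python
def posterior_after(prior, p_h_true, p_h_false, saw_heads):
    """One Bayes update on a binary hypothesis from a heads/tails-like signal."""
    like_true = p_h_true if saw_heads else 1 - p_h_true
    like_false = p_h_false if saw_heads else 1 - p_h_false
    evidence = prior * like_true + (1 - prior) * like_false
    return prior * like_true / evidence

prior = 0.7                     # current credence in the hypothesis
p_h_true, p_h_false = 0.8, 0.4  # P(signal | hypothesis true / false), made up

# The reasoner's own probability of seeing the signal:
p_heads = prior * p_h_true + (1 - prior) * p_h_false

expected_next = (p_heads * posterior_after(prior, p_h_true, p_h_false, True)
                 + (1 - p_heads) * posterior_after(prior, p_h_true, p_h_false, False))
print(expected_next)  # ~0.7: matches the prior, as the martingale property requires
```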
That said, I think it’s good to keep track of emotional updates as well as logical Bayesian ones, and those can behave however.
Thanks. With the benefit of hindsight, the blue envelopes probably should have been dropped from the graph, leaving the trace alone.
As you and Kwa note, having a ‘static’ envelope you are bumbling between looks like a violation of the martingale property—the envelope should be tracking the current value more (but I was too lazy to draw that).
I agree that, all else equal, you should expect resilience to increase with more deliberation—as you say, you are moving towards the limit of perfect knowledge with more work. Perhaps graphs 3 and 4 [I’ve added numbers to make referring easier] could signal that you’re moving from 10.1% to 10.2% along this hypothetical range from ignorance to omniscience.
Related to Kwa’s point, another benefit of tracking one’s beliefs is not only figuring out when to terminate deliberation, but also ‘keeping score’ of how rational one’s beliefs appear to be. Continued volatility (in G3, but also G4) could mean you are (rationally) in a situation where your weak prior is getting buffeted by a lot of strong evidence; but it could also mean you are under-damped and over-updating.
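As a rough sketch of this ‘keeping score’ idea (my own illustration; the diagnostic is an assumption of mine, not something from the post): for a martingale, successive belief changes are uncorrelated, so a strongly negative lag-1 autocorrelation in one’s updates is suggestive, though far from conclusive, evidence of being under-damped:

```python
def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a sequence (here: successive belief changes)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    if var == 0:
        return 0.0
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

beliefs = [0.5, 0.8, 0.4, 0.75, 0.45, 0.7]  # an oscillating trace, as in G3
changes = [b - a for a, b in zip(beliefs, beliefs[1:])]
print(lag1_autocorr(changes))  # about -0.83: looks like over-updating
```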
This seems sort of obvious so maybe I’m missing something?
Imagine there are two types of bins. One bin only has red balls. The other bin has both red and yellow balls in equal proportion.
You have one bin and you don’t know which one. You pick up balls successively from the bin, and you are estimating the color of the next ball you pick up.
Imagine picking up 5 balls in a row that are red. You logically believe that the next ball will be red with more than 50% probability.
Then, for the 6th ball, it’s yellow and you’re back to 50%.
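A quick sketch working through these numbers (my own code, just formalizing the example):

```python
P_RED_MIXED = 0.5  # chance of red from the mixed bin

def update(p_all_red, draw):
    """Bayes update on 'we hold the all-red bin' after observing one draw."""
    like_all_red = 1.0 if draw == "red" else 0.0
    like_mixed = P_RED_MIXED if draw == "red" else 1 - P_RED_MIXED
    evidence = p_all_red * like_all_red + (1 - p_all_red) * like_mixed
    return p_all_red * like_all_red / evidence

def p_next_red(p_all_red):
    """Probability the next draw is red, given current belief about the bin."""
    return p_all_red * 1.0 + (1 - p_all_red) * P_RED_MIXED

p = 0.5  # prior: either bin is equally likely
for _ in range(5):
    p = update(p, "red")
print(p, p_next_red(p))   # ~0.97 on the all-red bin; P(next red) ~0.985

p = update(p, "yellow")   # one yellow ball rules out the all-red bin
print(p, p_next_red(p))   # 0.0, and P(next red) is exactly 0.5 again
```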
I think the bin analogy seems dynamic, but it applies to assessing reports, where you’re trying to figure out a latent state and there isn’t a time element.
There are many situations in the world that seem to be like this, but it feels ideological or sophomoric to say it?
Fukuyama’s “end of history”, where confidence in the dominance and stability of democracy seems misplaced
Qing dynasty belief in its material superiority over outsiders
Reading Charles He’s forum comment history and deciding if he’s reasonable
(I’m understanding your comment as providing an example of a situation where volatility goes to 0 with additional evidence.)
I agree it’s clear that this happens in some situations—it’s less immediately obvious to me whether this happens in every possible situation.
(Feel free to let me know if I misread. I’m also not sure what you mean by “like this.”)
I think “volatility” (being able to predict whether the ball is yellow or red) is going higher?
But I feel like there is a real chance I’m talking past you and maybe wrong?
For example, you might be talking about forming beliefs about volatility. In my example, beliefs about volatility upon seeing the yellow ball are now more stable over time (even if volatility rises) as you know which bin you’re drawing from.
I guess I’m just repeating my example, where searches or explorations reveal something like “a new latent state”, so that previous information that was being used to form beliefs is no longer relevant.
It’s true this statement doesn’t have much evidence behind it (but partially because I’m sort of confused now what exactly the example is talking about).
Ok, I didn’t understand the OP’s examples or what he was saying (so I sort of missed the point of his post). So I think he’s saying that, in the fourth example, the range of reasonable beliefs could increase over time as you collect more information.
This seems unlikely and unnatural so I think you’re right. I retract my comment.
Ah sorry, I meant to use “volatility” to refer to something like “expected variance in one’s estimate of their future beliefs,” which is maybe what you refer to as “beliefs about volatility.”
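A compact sketch of that definition (my own formalization, reusing the made-up numbers from the martingale snippet above): volatility as the variance, under your own predictive distribution, of your credence after the next observation.

```python
def posterior_after(prior, p_h_true, p_h_false, saw_heads):
    like_true = p_h_true if saw_heads else 1 - p_h_true
    like_false = p_h_false if saw_heads else 1 - p_h_false
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

prior, p_h_true, p_h_false = 0.7, 0.8, 0.4
p_heads = prior * p_h_true + (1 - prior) * p_h_false

post_if_heads = posterior_after(prior, p_h_true, p_h_false, True)
post_if_tails = posterior_after(prior, p_h_true, p_h_false, False)

# By the martingale property, the mean of the next credence is the current
# credence, so the expected variance is taken around `prior`:
volatility = (p_heads * (post_if_heads - prior) ** 2
              + (1 - p_heads) * (post_if_tails - prior) ** 2)
print(volatility)  # ~0.032: expected squared swing in the next credence
```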
FYI, I was making a difficult career decision a few months ago and found this post helpful. Thanks for writing it!
I really like the idea here and think it’s presented well. (A+ use of illustrative graphs.) The tradeoff of “invest more in pondering vs. invest in exploring object-level options” is very common.
Two thoughts I’d like to add to this post:
Re-initiating deliberation & non-monotonic credence
I think that the credal ranges are not monotonically narrowing, mostly because we’re imperfect/bounded reasoners.
There are events in people’s lives / observations / etc. that cause us to realize that we’ve incorrectly narrowed credence in the past and must now re-expand our uncertainty.
This theory still makes a lot of sense in that world—where termination might be followed up by re-initiation in the future, and uncertainty-expanding events would constitute a clear trigger for that re-initiation.
Value of information for updating our meta-uncertainty
Given the above point about our judgement on when/how to narrow credal ranges being flawed, I think we should care about improving at that meta-judgement.
This adds an additional value of information to pondering more—that we improve our judgement for when to stop pondering.
I think this is important to call out because this update is highly asymmetric—it’s much easier to get feedback that you pondered too long (by doing extra pondering for very little update) than to get feedback that you pondered too short (because you don’t know what you’d think if you pondered longer).
In cases where there is this very asymmetric value of information, I think a useful heuristic is “if in doubt, ponder too long, rather than too short” (this doesn’t really account for the fact that it’s not Yes/No so much as an opportunity cost of other actions, but hopefully the heuristic can be adapted to be useful).
(Coda: this seems more like rationality than the modal EA forum post—maybe would get additional useful/insightful comments on LW)
This post was great.
I feel like my thinking around my daily diet is a bit like the third graph. (Should I be vegan? Should I not care, because my daily meal choices are small compared to what I do with my career/how productive I am, if I have a high enough probability of getting myself onto a high-impact career pathway? I find considerations just tend to bounce me around rather than settle me on a confident view, despite having thought about this on and off for many years.)
This was great. This question may be too meta for its own good:
Are there plausible situations where the trend of volatility isn’t stable over time? I.e., the blue-lined envelope appears to be narrowing over deliberative effort, but then expands wildly again, or vice versa. Call it ‘chaotic volatility’ for reference.
This might be just an extreme version of the fourth graph, but it actually seems even worse. At least in the fourth graph you might be able to recognise you’re in stable or worsening volatility—in chaotic volatility you could be mistaken about what kind of epistemic situation you’re in. You could think there’s little further to be gained, but you’re actually just before a period of narrowing volatility. Or think you’re settled and confident, but with a little more deliberation a new planet swims into your ken.
One example I could think of is if someone in the general public is doing some standard career-choice agonising, doing trials, talking to people, etc. and is getting greater resilience for an option. And then on a little further reading they find 80,000 Hours and EA, and all of a sudden there’s a ton more to think about and their previous resilience breaks.
I don’t know if anything action-relevant comes from considering this situation, beyond what the post already laid out. Maybe it just means keeping an eye out for possible quasi-‘crucial considerations’ for one’s own choices or something.
I want to print a poster with your last paragraph.
What a wonderful post!
I wonder if credal resilience (from the reasoner side) is the same as belief stability (from the belief side).
This could be turned into an easy online decision-support tool: input your goal, input your success metrics, guess your range of low-to-high impact from pursuing option X vs. the opportunity cost of option Z, and rate how certain you feel about your decision. Would one of the following increase your confidence: [set of options for decision support]? If you are building something, let me know.
I second machinaut and Benjamin Stewart’s comments.
My current work is in the area of rationality, (mis)information, and information search, where newly gained info could help uncover one’s own biases or add weight to the alternative decision (while arguments continue to aggregate). In addition to narrowing down the corridor for a change of mind over time, there is a chance that a qualitative epistemic leap may occur (e.g., when your horizon of available options expands through new ‘unknown unknowns’ info, or when a new, larger framework is uncovered that requires reappraisal). Here the range of options expands before narrowing down again—subjective uncertainty in the shape of a fir tree pointed to the right. Including these considerations in decisions might not be too hard with a bit of training.
Moreover, a decision could be transformed by analyzing features of the options and choosing a third, best-of-both option, or no decision at all. Not sure how to represent these.
While the volatility from unknown unknowns might seem to support epistemic relativism at first, any new information warranting an expansion would also seem to imply a broader or more complex view. Over time, it becomes increasingly unlikely to find such new information supporting a 1Upped worldview. So after the initially known types of resources are exhausted and credal resilience increases, one can reasonably settle on a decision—while remaining open to ‘game-changing information’. But if game-changing information is obtained, one could also be excused for reappraising and reversing the earlier decision; in this case, reversibility/transition paths should be considered more prominently to minimize sunk costs.
Nah. Never terminate deliberation.
;-)