Terminate deliberation based on resilience, not certainty

BLUF: When facing an important but uncertain decision, the trigger to stop deliberating and choose should not be reaching some aspirational level of confidence, but credal resilience: once further deliberation looks unlikely to shift your best guess much, go with it.
“We should ponder until it no longer feels right to ponder, and then to choose one of the acts it feels most right to choose… If pondering comes at a cost, we should ponder only if it seems we will be able to separate better options from worse options quickly enough to warrant the pondering” - Trammell
Introduction
Many choices are highly uncertain and highly consequential, and it is difficult to develop a highly confident impression of which option is best. In EA-land, the archetypal example is career choice, but similar dilemmas are common in corporate decision-making (e.g. grant-making, organisational strategy) and in life generally (e.g. “Should I marry Alice?” “Should I have children with Bob?”).
Thinking carefully about these choices is wise, and my impression is that people tend to err in the direction of too little contemplation—stumbling over important thresholds that set the stage for the rest of their lives. One of the key messages of effective altruism (perhaps the main one) is that our altruistic efforts err in a similar direction. Pace more cynical explanations, I think (e.g.) the typical charitable donor (or typical person) is “using reason to try and do good”. Yet they use reason too little and decide too soon whilst the ‘returns to (further) reason’ remain high, and so typically fall far short of what they could have accomplished.
One can still have too much of a good thing: ‘reasonableness’, ‘prudence’, or even ‘caution’ can be wise, but ‘indecisiveness’ less so. I suspect many can recall occasions of “analysis paralysis”, or occasions when they suspect prolonged fretting worsened the quality of the decision they finally made. I think folks in EA-land tend to err in this opposite direction, and I find myself giving similar sorts of counsel on these themes to those who (perhaps unwisely) seek my advice. So I write.
Certainty, resilience, and value of (accessible) information
It is widely appreciated that we can be more certain (or confident) of some things than others: I can be essentially certain I have two eyes, whilst guessing with little confidence whether it will rain tomorrow. We can often (and should even oftener) use numbers and probability to express different levels of certainty—e.g. P(I have two eyes) ~ 1 (- 10^-(lots)); P(rain tomorrow) ~ 0.5.
Less widely appreciated is the idea that our beliefs can vary not only in their certainty but in how much we expect this certainty to change. This ‘second-order certainty’ sometimes goes under the heading of ‘credal (in)fragility’ or ‘credal resilience’. The two can come apart, especially in cases of uncertainty. [1] I might think the chances of ‘The next coin I flip will land heads’ and ‘Rain tomorrow’ are both 50/50, but I’d be much more surprised if my confidence in the former changed to 90% than the latter: coins tend to be fair, whilst the weather forecasts I consult often surprise my own guesswork, and prove much more reliable. For coins, I have resilient uncertainty; for the weather, non-resilient uncertainty.
Credal resilience is—roughly—a forecast of the range or spread of our future credences. So when applied to ourselves, it is partly a measure of what resources we will be able to access to improve our guesswork. My great-great-great-grandfather, sans access to good meteorology, probably had much more resilient uncertainty about tomorrow’s weather. However, the ‘resource’ need not be more information; sometimes it is simply more ‘thinking time’: a ‘snap judgement’ I make on a complex matter may also be one I expect to shift markedly if I take time to think it through more carefully, even if I only spend that time trying to better weigh information I already possess.
Thus credal resilience is one input into value of information (or value of contemplation): all else equal, information (or contemplation) applied to a less resilient belief is more valuable than information applied to a more resilient one, as there’s a higher expected ‘yield’ in terms of credences changing (hopefully, and expectedly, for the better). Of course, all else is seldom equal; accuracy on some beliefs can be much more important than others: if I were about to ‘call the toss’ for a high-profile sporting event, eking out even a minuscule ‘edge’ on the coin might be worth much more than the much easier to obtain, and much more informative, weather forecast for tomorrow.
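To make the ‘spread of future credences’ picture concrete, here is a minimal sketch (my own toy model, with made-up Beta distributions standing in for where my credence might sit tomorrow): both beliefs start at 50%, but the expected movement, and hence the value of looking, differs sharply.

```python
# Toy sketch (illustrative, made-up numbers): two beliefs both at 50% today,
# with different forecasts of where my credence will sit tomorrow.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Coin: resilient uncertainty -- tomorrow's credence will still be ~50%.
coin_tomorrow = rng.beta(500, 500, size=n)

# Weather: non-resilient uncertainty -- after consulting a good forecast,
# my credence could land almost anywhere.
weather_tomorrow = rng.beta(1.5, 1.5, size=n)

for name, future in [("coin", coin_tomorrow), ("weather", weather_tomorrow)]:
    print(f"{name:8s} mean future credence = {future.mean():.2f}, "
          f"expected |shift| from 0.5 = {np.abs(future - 0.5).mean():.2f}")

# Both means stay ~0.5 (my expected future credence matches my current one),
# but the expected shift -- the 'yield' from checking -- is far larger for
# the weather belief, which is why information is worth more there.
```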
Decision making under (resilient?) uncertainty
Most ‘real life’ decision-making under uncertainty (so most real life decision-making in general) has the hidden bonus option of ‘postpone the decision to think about your options more’. When is taking the bonus option actually a bonus, rather than a penalty?
Simply put: “Go with your best guess when you’re confident enough” is the wrong answer, and “Go with your best guess when you’re resilient enough” is the right one.
The first approach often works, and its motivation is understandable. For low-stakes decisions (“should I get this or that for dinner?”), I might be happy only being 60% confident I am taking the better option. For many high-stakes decisions (e.g. “what direction should I take my career for the next several years?”) I’d want to be at least 90% sure I’m making the right call.
But you can’t always get what you want, and the world offers no warranty you can make all (or any) of your important decisions with satisfying confidence. Thus this approach runs aground in cases of resilient uncertainty: you want to be 90% sure, but you are 60% sure, and try as you might you stay 60% sure. Yet you keep trying in the increasingly forlorn hope the next attempt will excavate the crucial nugget of information which makes the decision clear.
Deciding once you are resilient enough avoids getting stuck in this way. In this approach, the ‘decision to decide’ is not based on achieving some aspirational level of certainty, but on comparing the marginal benefits of better decision accuracy against those of earlier decision execution. If my credence in “I should—all things considered [2]—go to medical school” is 0.6, but I expect it would change by 0.2 on average were I to spend a gap year thinking it over (e.g. a distribution of Beta(3,2) over my credence a year hence), maybe waiting a year makes sense: there’s maybe a 30% chance I would come to think medical school wasn’t the right choice after all, and averting that risk may be worth more than the cost of the 70%-likely outcome where further reflection returns the same answer and I have merely delayed the decision by a year.
Suppose I take that year, and now my credence is 0.58, but I think this would only change by +/- 0.05 if I took another year out (e.g. Beta(55, 40)): my deliberation made me more uncertain, but this uncertainty is much more resilient. Taking another gap year to deliberate looks less reasonable: although I think there’s a 42% chance this decision is the wrong call, there’s only a 6% chance I would change my mind with another year of agonising. My best guess is resilient enough; I should take the plunge.[3]
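For what it is worth, the figures in this toy example can be read straight off those two distributions (a quick check, on the assumption that ‘changing my mind’ means my credence a year hence ending up below 0.5):

```python
# Recovering the toy example's numbers, assuming Beta(3, 2) and Beta(55, 40)
# describe my credence a year hence, and "change my mind" means ending below 0.5.
from scipy.stats import beta

for label, a, b in [("before gap year", 3, 2), ("after gap year", 55, 40)]:
    dist = beta(a, b)
    print(f"{label}: mean = {dist.mean():.2f}, "
          f"typical swing (sd) = {dist.std():.2f}, "
          f"P(change my mind) = {dist.cdf(0.5):.2f}")

# before gap year: mean 0.60, swing ~0.20, ~31% chance of ending below 0.5
# after gap year:  mean 0.58, swing ~0.05, ~6% chance of ending below 0.5
```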
Track records are handy in practically assessing credal resilience
This is all easier said than done. One challenge is that applying hard numbers to a felt sense of uncertainty is hard. Going further and (as in the toy example) putting hard numbers on the distribution of what this felt sense of uncertainty would be after some further hypothesised information or contemplation (i.e. assessing credal resilience) looks nigh-impossible. “I’m not sure; then I thought about it for a bit; now I’m still not sure. What now?”
However, this sort of ‘auto-epistemic-forecasting’, like geopolitical forecasting, is not as hard as it first seems. In the latter, base rates, reference classes, and trend extrapolation are often enough to at least get one in the right ballpark. In a similar way, tracking your credal volatility over previous deliberation can give you a good idea of how resilient your uncertainty is, and of the value of further deliberation.
Imagine a graph with ‘Confidence I should do X’ on the Y-axis (sorry), and some metric of the amount of deliberation (e.g. hours, number of people you’ve called for advice, pages in your option-assessment Google Doc) on the X-axis. What might this graph look like?
The easy case is where one can see confidence (the trend line) heading in one direction with its volatility (the envelope around it) steadily decreasing. This suggests one’s belief is already high-confidence and high-resilience, and—extrapolating forward—spending a lot more time deliberating is unlikely to change this picture much.
Per previous remarks, the same applies even if deliberation resolves to a lesser degree of confidence. This still suggests resilience, and so perhaps that it is time to decide—as yet further deliberation can be expected to result in only minor perturbations around your (uncertain) best guess.
Alternatively, if the envelope hasn’t narrowed much (but is still narrowing), this suggests non-resilient uncertainty: further deliberation can reasonably be expected to give further resolution, and this might be worth the further cognitive investment.
The most difficult cases are ‘stable (or worsening) vacillation’. After perhaps some initial progress, one finds one’s confidence keeps hopping up and down in a highly volatile way, and this volatility does not appear to be settling despite more and more time deliberating: one morning you’re 70% confident, that evening 40%, next week 55%, and so on.
Given this post’s recommendation, in graphical terms, is ‘look at how quickly the envelope narrows’, probably the best option here is to decide once it is clear it is no longer narrowing. The returns to deliberation seem to have capped out, and although your best guess now is unlikely to be the same as your best guess at time t+n, it is not expected to be any worse either.
Naturally, few of us steadily track our credences re. some proposition over time as we deliberate further upon it (perhaps a practice worth attempting). Yet memory might be good enough for us to see what kind of picture applies. If I recall “I thought I should go to medical school, and after each ‘thing’ I did to inform this decision (e.g. do some research, shadow a doctor, work experience in a hospital) this impression steadily resolved”, perhaps it is the first or second graph. If I recall instead “I was certain I should become a doctor, but then I thought I should not because of something I read on the internet, but then I talked to my friend who still thought I should, and now I don’t know”, perhaps it is more the third graph.
This approach has other benefits. It can act as a check on the amount of deliberation we are amassing, both in general and with respect to particular deliberative activities. In a general sense, we can benchmark how long quasi-idealised versions of ourselves would mull over a decision (at least bounded to some loose order of magnitude—e.g. “more than an hour, but less than a month”). Exceeding these bounds in either direction can be a tripwire to consider whether we are being too reckless or too diffident. It can also be applied to help gauge the value of particular types of information or deliberation: if my view on taking a job changed markedly on talking to an employee, maybe it would be worth me talking to a couple more; if my credence has stayed at a mediocre level as my deliberation Google Doc grew from 10 pages to 30, not much is likely to change if I add another 20 pages.
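For anyone who does keep such a log, the ‘is the envelope still narrowing?’ check can be crudely mechanised. A minimal sketch, under my own assumptions (credences recorded after each chunk of deliberation, volatility measured as the standard deviation over a recent window):

```python
# Crude convergence check over a log of credences recorded after each
# chunk of deliberation (call, doc page, week of mulling, ...).
import statistics

def still_converging(credences: list[float], window: int = 5,
                     tolerance: float = 0.01) -> bool:
    """True if recent volatility is still meaningfully lower than before."""
    if len(credences) < 2 * window:
        return True  # too little data: keep logging (and deliberating)
    recent = statistics.pstdev(credences[-window:])
    previous = statistics.pstdev(credences[-2 * window:-window])
    return recent < previous - tolerance

# Example: the 'stable vacillation' case -- confidence keeps hopping around
# and the hopping isn't settling, so the check says it's time to decide.
log = [0.70, 0.40, 0.55, 0.65, 0.45, 0.40, 0.70, 0.50, 0.65, 0.42]
print("keep deliberating" if still_converging(log) else "decide now")
```

The window and tolerance are arbitrary knobs; the point is only that the comparison recommended above by eye is easy to run off a simple log.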
Irresilient consolation
Although the practical challenges of applying resilience to decision-making are real, I don’t think they are the main problem. In my experience, folks spending too long deliberating on a decision with resilient uncertainty are not happily investigating their dilemma, looking forward to figuring it out, yet blissfully unaware that their efforts are unlikely to pay off and that they would be better off just going with their best guess. Rather, their dilemma causes them unhappiness and anxiety, they find further deliberation aversive and stressful, and they often have some insight that this agonised deliberation is at least sub-optimal, if not counter-productive, if not futile. You at least kind-of know you should just decide, but resort to festinating (and festering) rumination to avoid confronting the uncertain dangers of decision.
One of the commoner tragedies of the human condition is that insight is not sufficient for cure. I may know that airliners are an extremely safe means of travel, and that I plausibly took greater risks driving to the airport than on the flight itself, yet nonetheless check the wings haven’t fallen off after every grumble of turbulence. Yet although ‘knowing is less than half the battle’, it remains a useful ally—in the same way cognitive-behavioural therapy is more than, but also partly, ‘getting people to argue themselves out of their depression and anxiety’.
The stuff above is my attempt to provide one such argument. There are others I’d often be minded to offer too, in ascending order of generality and defensibility:
“It’s not that big a deal”: Folks often overestimate the importance and irreversibility of their decisions: in the same way the anticipated impact of an adverse life event on wellbeing tends to be greater than its observed impact, the costs of making the wrong decision may be inflated. Of my medical school cohort, several became disenchanted with medicine at various points, often after committing multiple years to the vocation; others simply failed the course and had to leave. None of this is ideal, but it was seldom (~never) a disaster that soured their lives forevermore.
Similarly, big decisions can often be (largely) unwound. Typical ‘exits’ from medical school are (if very early) leaving to re-apply to university; (if after a couple of years) converting one’s study into a quasi-improvised bachelor’s in biological science; (if later) graduating as a ‘doctor’ but working in another career afterwards. This is again not ideal: facially, spending multiple years preparing for a career one will not go on to practise is unlikely to be optimal—but the expected costliness is not astronomical.
Although I often say this, I don’t always. I think typical ‘EA dilemmas’ like “Should I take this job or that one?” “What should I major in?” are lower-stakes than the medical school examples above,[4] but that doesn’t mean all are. The world also offers no warranty that you’ll never face a decision where picking the wrong option is indeed hugely costly and very difficult to reverse. Doctrinal assertions otherwise are untrustworthy ‘therapy’ and unwise advice.
“Suck it and see”: trying an option can have much higher value of information than further deliberation. Experience is not only the teacher of fools: sometimes one has essentially exhausted all means to inform a decision in advance, leaving committing to one option and seeing how it goes the only means left. Another way of looking at it: it might be more time efficient to trial doing an option for X months rather than deliberating on whether to take it for X + n months. [5]
Obviously this consideration doesn’t apply to highly irreversible choices. But for choices across most of the irreversibility continuum, it can weigh in favour of trying (and my impression is that the consideration “you also get information—and maybe the best available remaining information—from trying something directly, not just from further deliberation” is often neglected).
Another caveat is when the option being trialled is some sort of co-operative endeavour: others might hope to rely on your commitment, so may not be thrilled at (e.g.) a 40% risk you’ll back out after a couple of months. However, if you’re on the same team, this seems apt for transparent negotiation. Maybe they’re happy to accept this risk, or maybe there’s a mutually beneficial deal to be struck: e.g. you commit to this option for at least Y months, so even if you realise earlier that it is not for you, you will ‘stick it out’ a while to give them time to plan for and mitigate your backing out. [6]
“Inadequate amounts ventured, suboptimal amounts gained (in expectation)”: optimal strategy is reconciled to some exposure to the risks of mistake. As life is a risky prospect, you can seldom guarantee yourself the best outcome: morons can get lucky, and few plans, no matter how wise, are proof against getting wrecked by sufficiently capricious fortune. Nor can you promise yourself you will always adopt the optimal ex ante strategy, but this is a much more realistic and prudentially worthwhile ambition.
One element of an ‘ideal strategy’ is an ideal level of risk tolerance. Typically, some available risk reductions are costly, so there comes a point where accepting the remaining risk can be better than further attempts to eliminate it—cf. ‘Umeshisms’ like “If you never miss a flight, you are spending too much time at airports”.
Although the ideal level of risk tolerance varies (cf. “If you’re never in a road traffic accident, you’re driving too cautiously”) it is ~always some, and for altruistic benefit the optimal level of (pure) risk aversion is typically ~zero. Thus optimal policy will calculate many substantial risks as nonetheless worth taking.
This can suck (worse, can be expected to suck in advance), especially in longtermist-land where the impact usually has some conjuncts which are “very uncertain, but plausibly very low probability”: maybe the best thing for you to do is devote yourself to some low probability hedge or insurance, where the likeliest ex post outcome was “this didn’t work” or (perhaps worse from the point of view of your own reflection) “this all was just tilting at windmills”.
Perhaps one can reconcile oneself to the point of view of the universe: it matters what rather than who, so best complementing the pre-existing portfolio is the better metric to ‘keep score’; or perhaps one could remember that one really should hope the risks one devotes oneself to are illusory (and ideally, all of them are); or perhaps one should get over it: “angst over inadequate self-actualization” is neither an important cause area nor an important decision criterion. But ultimately, you should do the right thing, even if it sucks.
[1] High levels of certainty imply relatively high levels of credal resilience: the former places lower bounds on the latter. To motivate: suppose I was 99% sure of rain on Friday. Standard doctrine is that my current confidence should equal the expected value of my future confidence. So in the same way 99% confidence is inconsistent with ‘I think there’s a 10% chance that on Saturday I will believe P(rain Friday) = 0’ (i.e. in effect a 10% chance it doesn’t rain after all), it is also inconsistent with ‘I think there’s a 20% chance that in an hour I will be 60% confident it will rain on Friday’: even if the rest of my probability mass were at 100%, this nets out to 92% overall (0.8*1 + 0.2*0.6), not 99%.
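To spell out the bound this footnote gestures at (my own algebra, not in the original; write p for the current credence, c for the lower future credence, and q for its supposed probability):

```latex
% Current confidence should equal the expected value of future confidence.
% Even placing all remaining probability mass at full certainty (credence 1):
p = \mathbb{E}\left[P_{\text{future}}\right] \le (1-q)\cdot 1 + q\,c
\quad\Longrightarrow\quad
q \le \frac{1-p}{1-c}
% e.g. p = 0.99, c = 0.6 gives q <= 0.01/0.4 = 0.025: at most a 2.5% chance
% of ever dropping to 60% confidence, well short of the supposed 20%.
```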
[2] Including, for example, all the (many) other options I have besides going to medical school.
[3] One family of caveats (mentioned here for want of a better location) is around externalities. Maybe my own resilient confidence is not always sufficient to justify choices I make that could adversely affect others. Likewise, a policy of action in these cases may go wrong when decision-makers are poorly calibrated and do not consult with one another enough (cf. the unilateralist’s curse).
I do not think this changes the main points, which apply to something like the resilience of one’s ‘reasonable, all things considered’ credence (including peer disagreement, option value, and sundry other considerations). Insofar as folks tend to err in the direction of being headstrong because they neglect their propensity to be overconfident, or fail to attend to their impact on others, it is worth stressing these as general reminders.
[4] In terms of multiplier stacking (q.v.), if not absolute face value (but the former matters more here).
[5] A related piece of advice I often give is to run ‘trial’ and ‘evaluation’ in series rather than in parallel. So, rather than commit to try X, but spend your time during this trial simultaneously agonising (with the ‘benefit’ of up-to-the-minute signals from how things are going so far) about whether you made the right choice, commit instead to try X for n months, and postpone evaluation (and the decision to keep going, stop, or whatever else) to time set aside afterwards.
[6] I guess the optimal policy of ‘flakiness’ versus ‘stickiness’ is very context-dependent. I can think of some people who kept ‘job hopping’ rapidly whom I suspect would have been better served had they stuck with each option a while longer (to say nothing of the likely ‘transaction costs’ imposed on their colleagues). However, surveying my anecdata, I think I more often see folks sticking around too long in suboptimal (or just bad) activity.
One indirect indicator is that I too frequently hear misguided reasons for sticking around like “I don’t want to make my boss sad by leaving”, or “I don’t want to make life difficult for the colleagues who would have to fill in for me and find a replacement”: employment contracts are not marriage vows, and your obligations here are (rightly) fully discharged by giving fair notice and helping with the transition/handover.