One’s Future Behavior as a Domain of Calibration

Summary: This post focuses on one particular domain of forecasting: one’s own behavior. I personally find it very useful to make frequent (e.g. weekly) predictions about how likely I am to get a list of planned tasks done in time. Achieving calibration this way allows for more accurate planning and greater reliability. It also helps during planning itself by making obvious which tasks/goals require further refinement due to a low probability of getting done. Consider this post a nudge to explore this direction of “self calibration” and figure out whether the proposed approach is valuable for you.

On Calibration and Resolution

(feel free to skip this section if you’re familiar with the terms, e.g. due to having read Superforecasting)

Calibration describes how well a person’s predictions align with reality in a given domain. If a person is well calibrated (in a domain), then of all their predictions (in that domain) stating something is e.g. 60% likely to happen, about 60% actually come true. This alignment between predictions and reality is what makes their predictions actually useful for decision making. People who aren’t calibrated (in a domain) lack this reliable correspondence between forecasts and reality: their predictions are systematically off, thus less reliable, and may consequently lead to suboptimal decisions more often than necessary.

Another metric to measure the performance of a forecaster is what Philip Tetlock calls resolution, which could be described as the boldness of one’s predictions: High resolution means your predictions are close to 0% or 100%, whereas low resolution means many of your predictions are closer to 50%.

An ideal forecaster would have great calibration as well as great resolution. Both can generally be improved: the former through calibration training, the latter e.g. by gaining expertise or gathering more information. Calibration can, in principle, get very close to perfect. The same is probably not true for resolution (unless one is dishonest or extremely overconfident), due to the inherent uncertainty of the future.
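
To make these two terms a bit more concrete, here is a minimal toy sketch in Python. The data is made up, and the simple “distance from 50%” measure is only an informal stand-in for boldness, not how resolution is formally scored:

```python
# Toy illustration of calibration vs. resolution.
# Each prediction is a (stated probability, came true?) pair; the data is made up.
predictions = [
    (0.6, True), (0.6, False), (0.6, True), (0.6, True), (0.6, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

# Calibration: of all predictions made at a given probability, how many came true?
# A well calibrated forecaster's 60% predictions come true about 60% of the time.
for stated in sorted({p for p, _ in predictions}):
    outcomes = [came_true for p, came_true in predictions if p == stated]
    print(f"stated {stated:.0%}: {sum(outcomes)}/{len(outcomes)} came true")

# Resolution in Tetlock's informal sense of "boldness": how far the stated
# probabilities sit from 50%. Bolder forecasts lie closer to 0% or 100%.
boldness = sum(abs(p - 0.5) for p, _ in predictions) / len(predictions)
print(f"average distance from 50%: {boldness:.2f}")
```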

Calibration training describes the process of deliberately making (and afterwards evaluating) many predictions within a particular domain, calibrating oneself in the process. While making well calibrated predictions may involve skills such as finding good reference classes and applying Bayesian updating, one highly relevant subskill seems to be mapping one’s feeling of certainty about something onto a concrete (numeric) probability. This is especially true for the domain discussed in this post. Improvements in calibration come relatively quickly, and a mere few dozen evaluated predictions can be enough to make measurable progress.

Forecasting Domains

The previous section leaned heavily on the term “domain”. The concept of forecasting domains is important, because neither calibration nor resolution transfers well between domains. For resolution this is relatively obvious: Expertise in one area – say US politics – does not grant you expertise in others, such as start-up performance, climate science or replications of studies in social science. With respect to calibration it seems somewhat more plausible that e.g. getting rid of one’s general overconfidence may yield benefits in many domains rather than being limited to a single one. This only seems to get you so far, however, and calibration too needs to be trained separately in different domains to get the best possible results[1].

Most “classic” forecasting domains are external ones about things that happen in the outside world, which usually are not directly affected by the forecaster. This post however is about a very particular domain: your personal future behavior. This domain differs from typical domains in several ways:

  1. The forecaster determines both the predictions and their outcome; there is thus some risk of the predictions affecting the outcome, which may be undesired (unless they affect the outcome in a positive way, in which case the whole ordeal may be epistemically questionable but still instrumentally useful)

  2. Evaluation is very easy, as the forecaster directly observes their own behavior, so no research of any kind is involved

  3. It’s easy to find a lot of predictable events and thus calibrate quite quickly

  4. It’s a very personal domain where in most cases it makes very little sense to involve other forecasters

  5. It’s more important than in other domains to actively generate new prediction statements (whereas in other domains one could in principle rely on pre-existing statements and merely generate probabilities[2])

The first point will be addressed in a later section, but points 2 and 3 draw a positive picture of this particular domain as one that allows relatively easy calibration training. I also argue that it’s not only relatively easy, but also exceptionally useful, for a number of reasons:

  • It’s useful for planning, as you can attain detailed insight into what you’re likely to do and how modifications to your plans impact the expected outcomes

  • It allows people to become more reliable in their communication towards others

  • It’s a very robust domain which will always be useful for a person, independent of any changes in interests or career

However, I rarely see this domain mentioned when people discuss forecasting. It therefore appears quite underrated to me.

Predicting One’s Future Behavior

In order to achieve calibration quickly and easily, one should find ways to make predictions that

  1. can be accumulated quickly and in large numbers,

  2. are easy to formulate,

  3. are easy to evaluate,

  4. and have a short time horizon.

While I’ve described my personal system in detail in the comments below this post, I’ll try to keep the post itself more general. So, to people who would like to invest a bit of their time and energy into calibration training in this domain, I’d suggest the following:

  1. Pick a recurring time. This could be a day of the week, every morning, or the first of each month. At these times, spend a few minutes on calibration training by evaluating prior predictions as well as making new predictions on concrete aspects of your behavior within a certain timeframe

    1. Making this a recurring thing ensures calibration stays at a high level over time. While doing calibration training once and then being “calibrated” is already useful, I would assume it pays off to measure calibration recurrently, especially given that it can be done in a matter of minutes.

    2. If you already have a system for goal setting or task planning, I suggest simply appending this whole process to it.

  2. Before actually making new predictions, evaluate the predictions from your last session

    1. For every individual prediction, evaluate whether it turned out to be true or false (or, in rare cases, indecisive, in which case it should be dropped from the evaluation)

    2. After this, I recommend two metrics to check:

      1. Comparing the expected value of last session’s predictions (i.e. the sum of all probabilities) to the number of predictions evaluated as true, to identify a general under- or overconfidence in that time frame (example[3])

      2. Looking at percentage ranges for all predictions from e.g. the last few months, to identify potentially more fine-grained patterns, such as “I tend to be underconfident in the 80-89% range”

  3. You then make new predictions for the time until the next session, adjusting them based on the findings of your evaluation

    1. I prefer to make almost purely System 1 based predictions, i.e. very quick intuitive ones, without spending too much time on them.

    2. Which concrete things you predict is of course up to you. In my case, this whole process is part of my weekly planning, and I assign each task a probability of being crossed off the list by the end of the week. Alternatively, you could predict how well you’ll stick to your habits, how much you’ll sleep (given you’re tracking that), whether you’ll leave the house, etc. Basically anything related to your behavior that seems relevant enough that you’d like to gain an accurate view of it.

There are many ways to implement such a system. You could probably do it with https://predictionbook.com/, Predict, https://www.predictit.org/, https://www.foretold.io/ or any other such service, although I’m admittedly not sure how well evaluating subsets of your predictions, as recommended in point 2b above, is supported in all of them, if at all. In theory you could even do most of this on paper, but that would naturally make the evaluation difficult. My personal approach is to use a spreadsheet, as that makes it very quick to add predictions (no mouse clicks required, no page reloads, simply a table with one prediction per row, containing name/statement, probability and evaluation (true or false) in individual cells), while also allowing me to fully automate the evaluation to my liking; a minimal sketch of what such an automated evaluation could look like follows below.
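
To illustrate, here is a minimal sketch of what an automated evaluation along the lines of step 2b could look like. It assumes predictions are kept in a CSV file with one row per prediction and columns named statement, probability and came_true; the file name and column names are purely illustrative, not a fixed format:

```python
# Minimal sketch of automating the evaluation from step 2b.
# Assumes a CSV with one prediction per row: statement, probability, came_true (1/0).
import csv
from collections import defaultdict

with open("predictions.csv", newline="") as f:
    rows = [(float(r["probability"]), r["came_true"] == "1")
            for r in csv.DictReader(f)]

# 2b-i: compare the expected value (sum of all probabilities) with the number of
# predictions that actually came true, to spot general under- or overconfidence.
expected = sum(prob for prob, _ in rows)
actual = sum(came_true for _, came_true in rows)
print(f"expected ~{expected:.1f} of {len(rows)} predictions to come true, got {actual}")

# 2b-ii: group predictions into percentage ranges to find finer-grained patterns,
# e.g. "I tend to be underconfident in the 80-89% range".
ranges = defaultdict(lambda: [0, 0])  # lower bound of range -> [count, count true]
for prob, came_true in rows:
    bucket = min(int(prob * 10), 9) * 10
    ranges[bucket][0] += 1
    ranges[bucket][1] += came_true
for bucket in sorted(ranges):
    count, true_count = ranges[bucket]
    print(f"{bucket}-{bucket + 9}%: {true_count}/{count} came true")
```

In a spreadsheet, the same two checks roughly correspond to a SUM over the probability column compared against a count of the true evaluations, plus something like a COUNTIFS per percentage range.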

Benefits

I see three primary benefits this form of calibration provides: Firstly, it allows me to predict how changes to my plans affect my behavior, and thus makes it easier to figure out effective ways to reach my goals. When setting a goal, I may realize that I’m only 30% likely to reach it before some deadline. I can then identify numerous ways to increase the likelihood of success (e.g. make the goal more attractive or punish failure, contact somebody who may be able to help me, buy some tool that’s helpful in making progress, put reminders in place to ensure I don’t forget to work on it in time, etc.), and get a precise feeling for how big each adjustment’s impact on the overall probability is.

Secondly, I waste less time on failing projects, as it’s easier to anticipate whether I’ll be able to finish something or not.

Thirdly, it leads to being a more reliable person. When somebody asks me whether I’ll attend an event or get something done by next week, good calibration allows me to give accurate answers.

Another potential benefit is that this domain affords making many predictions frequently, while also blurring the line between one’s personal life and forecasting (which otherwise may be seen as two very separate areas). It is also comparably easy, forgiving and rewarding, as you don’t require specific expertise of any kind to make good (high-resolution) predictions. Consequently it’s a good driver of making forecasting a habit, and may generally be a good domain for people to get into forecasting in the first place.

Caveats

There are a number of things to keep in mind. Namely, I can think of five caveats to all of this. I’ll also explain why I believe these caveats aren’t all that problematic or how they can be addressed.

  1. Predictions may affect outcomes: as hinted at before, this domain is special in that predictions can affect outcomes much more than in other domains. If good calibration feels more important to you than behaving optimally, then you might deceive yourself into not doing things merely because you assigned a low probability to doing them. Similarly, assigning high probabilities might increase your motivation to do those things. Consequently, this form of calibration may in some cases be epistemically questionable, but could still be instrumentally useful. I personally don’t feel like this seriously affects me, as reaching my goals / getting things done feels much more important to me than cheating my way to a better calibration score. I do find it useful though to keep my predictions hidden from myself until evaluation, to reduce the likelihood of this becoming an issue in the first place.

  2. High correlation between predictions: Predictions in this domain are more interdependent than in others. This is because many plausible happenings, such as falling ill, personal tragedies, or even just mood fluctuations, may affect all or most of the predictions at a given time similarly, which might mess up the evaluation[4]. This is definitely a drawback, but one that can be addressed in multiple ways:

    1. You could generally try to very broadly factor in all such effects in the probabilities (e.g. by assuming a, say, 4% chance of illness per week, so that a task you would otherwise give 80% gets roughly 0.8 × 0.96 ≈ 77%, if illness would prevent it entirely). This could still lead to misleading evaluations in the short term, but things should even out in the long term.

    2. When making the predictions, you may implicitly append a “given no unforeseen major disruptions intervene” kind of qualifier, ideally with a well-defined “major disruption” criterion to mitigate any self-deception.

    3. Or you do what I did and mostly ignore this issue at first, until one day it becomes relevant, and only then decide what, if anything, to do about it.

  3. Reliance on anecdotal evidence: I have only limited concrete evidence about how useful this really is, as I’m not aware of other people using similar systems, and can thus only report anecdotally, which is reason enough to be skeptical. I can say that I personally have a strong impression that the aforementioned benefits exist. Calibration scores quickly converged for me in this domain and have remained in a good state. I’ve been using prediction-based planning for my goals for around 1.5 years now, and am reasonably sure it does lead to achieving more of my personal goals. It’s certainly possible these benefits don’t generalize well to other people. On the other hand, the general case for calibration training is strong, so it’s a reasonable prior to assume that calibration training will be effective in this domain as well. It also seems very plausible to me that having a more accurate view of one’s future behavior is useful.

  4. Benefits don’t come for free: The question of course is not only whether or not this is useful, but whether it’s worth the cost involved, i.e. time and energy. Setting up such a system always takes some effort, which I assume stops most people from following through on this idea even if they do think it sounds convincing. In the long run however, the cost isn’t all that high. As pointed out in one of my comments below, I usually spend ~10 minutes per week on the prediction & evaluation part, which is enough to consistently keep me well calibrated.

  5. Individual predictions are not that valuable: When we think of forecasting, we usually think of interesting and relevant questions regarding e.g. politics or AGI timelines. For such questions, it’s easy to see how getting closer to the truth is valuable. This is not necessarily the case for “will I spend at least four hours this week writing that forum post” or “will I stick to my plan of getting groceries tomorrow”. The value derived from getting closer to the truth of such questions is comparably low. My response to this would be that the reduced effort going into each single prediction makes up for it. So while the value per prediction may indeed be lower, this does not hold for the value per time invested[5]. Additionally, there certainly are cases where accurately predicting one’s future behavior is highly valuable, such as when considering whether to pursue a difficult degree, start a costly project with a significant risk of failure, or even just sign up for a gym membership. For such cases alone it may be worth having improved one’s self calibration.

Conclusion & Discussion

Many people know of calibration training and the value that lies in frequently making and evaluating predictions. Yet I’m not aware of anyone using such methods to gain a deeper self-understanding. I believe that, while there are some caveats, this particular domain of forecasting is especially useful, and one that allows relatively fast and easy calibration training due to the forecaster being so fundamentally in control of things.

My hope is that this post causes a few people to consider this domain and possibly begin experimenting with it. Other welcome outcomes would be discussions on the drawbacks, benefits and the experiences (if any) other forecasters have with self calibration.

My hypothesis is that many people who aren’t active forecasters might agree with the key points of this post to some degree, but won’t take action as setting up such a system is somewhat effortful. The upfront effort combined with relatively indirect benefits that have merely anecdotal backing might, for many, not make this seem like a very good use of their time.

People who already have some experience in forecasting, on the other hand, might have the systems and habits in place, but disregard this domain due to its peculiarities mentioned earlier in this post (such as predictions affecting outcomes). Another explanation may be technical: their preferred tools might make it difficult to create many predictions in a short time, effectively nudging them towards fewer, more “high-stakes” predictions rather than many simple ones, which doesn’t work well with this domain.

I’d particularly like to spark some discussion regarding the following questions:

  1. Is this forecasting domain actually as promising as I make it out to be?

  2. Is my impression correct that few people actively work on “self calibration”, or is that actually somewhat widespread behavior of which I’ve been ignorant until now?

  3. In case people really do think of this as valuable in general, but too difficult/effortful to actually apply in practice, what may be the best way to make this as easy as possible for others? Which forecasting tool would work best for this? Would a well designed spreadsheet template do the job?

Thanks for reading, and also thanks a lot to Leonard Dung and one anonymous friend for helping me with this post.


  1. ↩︎
  2. ↩︎

    e.g. I’m sure a lot of people participate on https://www.metaculus.com/ without ever having submitted a question

  3. ↩︎

    Assuming you have a to-do list of four items, and your predictions of getting them done are respectively 95%, 40%, 86% and 63%, your EV would be 0.95 + 0.4 + 0.86 + 0.63 = 2.84. If upon evaluation you realize you only got one of the items done, you might conclude that you were overconfident in your predictions, and correct future predictions downward a bit.

  4. ↩︎

    For instance, imagine a perfectly calibrated person making 40 predictions for the next week. Under normal circumstances that should be a decent enough sample size for any noise to mostly cancel out. If the EV (i.e. the sum of all predicted probabilities) was 32.9, the number of predictions evaluated as true should usually end up close to that number. If, however, the person falls ill on Monday evening and then spends the rest of the week in bed, this will affect almost all the predictions for that week equally, and in the end only 5 instead of ~33 things might get done.

  5. ↩︎

    This argument only works, however, if the forecaster uses prediction software that makes it reasonably easy and frictionless to create many new predictions.