If you haven’t spent time on calibration training, I recommend it! Open Phil has a tool here: https://www.openphilanthropy.org/blog/new-web-app-calibration-training. Making good forecasts is a mix of ‘understand the topic you’re making a prediction about’ and ‘understand yourself well enough to interpret your own feelings of confidence’. Even if they mostly don’t have expertise in the topic they’re writing about, I think most people can become pretty well-calibrated with an hour or two of practice.
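For concreteness, here's a minimal sketch of what 'well-calibrated' means in practice (a hypothetical Python example with made-up numbers, not anything from Open Phil's tool): group your predictions by the probability you assigned, then check how often each group actually came true.

```python
from collections import defaultdict

# Hypothetical data: (stated probability that the claim is true, whether it turned out true)
predictions = [
    (0.6, True), (0.6, False), (0.7, True), (0.9, True),
    (0.9, True), (0.5, False), (0.8, True), (0.8, False),
]

# Group predictions by the confidence level you stated
buckets = defaultdict(list)
for prob, outcome in predictions:
    buckets[round(prob, 1)].append(outcome)

# Compare each stated confidence level to the fraction that came true
for prob in sorted(buckets):
    outcomes = buckets[prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%}: {hit_rate:.0%} came true ({len(outcomes)} predictions)")

# Well-calibrated: hit rates roughly track stated probabilities.
# Hit rates consistently above your stated probabilities suggest underconfidence;
# consistently below, overconfidence.
```

(With a real track record you'd want many more predictions per bucket before reading much into the numbers, but this is the basic exercise the calibration tools automate for you.)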
And that’s a valuable service in its own right, I think. It would be a major gift to the public even if the only take-away readers got from predictions at the end of articles were ‘wow, even though these articles sound confident, the claims almost always tend to be 50% or 60% probable according to the reporter; guess I should keep in mind these topics are complex and these articles are being banged out in a few hours rather than being the product of months of study, so of course things are going to end up being pretty uncertain’.
If you also know enough about a topic to make a calibrated 80% or 90% (or 99%!) prediction about it, that’s great. But one of the nice things about probabilities is just that they clarify what you’re saying—they can function like an epistemic status disclaimer that notes how uncertain you really are, even if it was hard to make your prose flow without sounding kinda confident in the midst of the article. Making probabilistic predictions doesn’t have to be framed as ‘here’s me using my amazing knowledge of the world to predict the future’; it can just be framed as an attempt to disambiguate what you were saying in the article.
Relatedly, in my experience ‘writing an article or blog post’ can have bad effects on my ability to reason about stuff. I want to say things that are relevant and congruent and that flow together nicely; but my actual thought process includes a bunch of zig-zagging and updating and sorting-through-thoughts-that-don’t-initially-make-perfect-crisp-sense. So focusing on the writing makes me focus less on my thought process, and it becomes tempting for me to confuse the writing process or written artifact with my thought process or beliefs.
You’ve spent a lot of time living and breathing EA/rationalist stuff, so I don’t know that I have any advice that will be useful to you. But if I were giving advice to a random reporter, I’d warn about the above phenomenon and say that this can lead to overconfidence when someone’s just getting started adding probabilistic forecasts to their blogging.
I think this calibration-and-reflection bug is important—it’s a bug in your ability to recognize what you believe, not just in your ability to communicate it—and I think it’s fixable with some practice, without having to do the superforecaster ‘sink lots of hours into getting expertise about every topic you predict’ thing.
(And I don’t know, maybe the journey to fixing this could be an interesting one that generates an article of its own? Maybe a thing that could be linked to at the bottom of posts to give context for readers who are confused about why the numbers are there and why they’re so low-confidence?)
I agree with these comments, and think the first one—“If you haven’t spent time on calibration training...”—makes especially useful points.
Readers of this thread may also be interested in a previous post of mine on Potential downsides of using explicit probabilities. (Though be warned that the post is less concise and well-structured than I’d aim for nowadays.) I ultimately conclude that post by saying:
There are some real downsides that can occur in practice when actual humans use explicit probabilities (or explicit probabilistic models, or maximising expected utility)
But some downsides that have been suggested (particularly causing overconfidence and understating the value of information) might actually be more pronounced for approaches other than using explicit probabilities
Some downsides (particularly relating to the optimizer’s curse, anchoring, and reputational issues) may be more pronounced when the probabilities one has (or could have) are less trustworthy
Other downsides (particularly excluding one’s intuitive knowledge) may be more pronounced when the probabilities one has (or could have) are more trustworthy
Only one downside (reputational issues) seems to provide any argument for even acting as if there’s a binary risk-uncertainty distinction
And even in that case the argument is quite unclear, and wouldn’t suggest we should use the idea of such a distinction in our own thinking
(That quote and post are obviously somewhat tangential to this thread, but also somewhat relevant. I lightly edited the quote to make it make more sense out of context.)
I will look at that OpenPhil thing! I did do a calibration exercise with GJP (and was, to my surprise, both quite good and underconfident!) but I’d love to have another go.
All this makes a lot of sense, by the way, and I will take it on board.