Yeah, I said a bit about that in the ACX thread, in an exchange with Jeffrey Soreff here. Initially I was talking about a “maximally informed” forecaster/trader, but when Jeffrey pointed out that that term was ill-defined, I realized I had a lower bar for “informed” in mind, one that was more practically achievable than some notions of “maximally informed.”
Could you say more about “practically possible”? What steps do you think one could have taken to have reached, say, a 70% credence?
Basically just steps to become more informed and steps to have better judgment. (Saying specifically what knowledge would be sufficient to be able to form a forecast of 70% seems borderline impossible or at least extremely difficult.)
Before the election I was skeptical that people like Nate Silver and his team, and The Economist’s election modeling team, were actually doing as good a job as they could have been[1] at forecasting who’d win the election, and post-election I remain skeptical that their forecasts were close to the best they could have been.
[1] “doing as good a job as they could have been” meaning: I think they would have made substantially better forecasts in expectation (lower expected Brier scores) if figuring out who was going to win had been significantly more important to them than it actually was; if they hadn’t cared about the blowback for being “wrong” after a confident wrong-side-of-maybe forecast; if they’d been given a big budget (e.g. $10M) to do research and acquire information; and if they’d been highly skilled forecasters with great judgment, like the best in the world but not superhuman. (Maybe Nate Silver is close to this; I don’t know. I read his book The Signal and the Noise, but it seems plausible that there is still substantial room for him to improve his forecasting skill.)
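To make concrete what “lower Brier scores in expectation” rewards and punishes, here is a minimal sketch (my own illustration, not anything from Silver’s or The Economist’s actual models): the Brier score is just the mean squared error between probabilistic forecasts and 0/1 outcomes, so a permanent 50% hedge scores a flat 0.25, while confident forecasts score very well when right and very badly when wrong, which is exactly the “wrong-side-of-maybe” blowback risk.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hedging at 50% scores 0.25 no matter what happens.
print(brier_score([0.5, 0.5], [1, 0]))            # 0.25

# A confident, correct forecaster does far better...
print(round(brier_score([0.9, 0.1], [1, 0]), 2))  # 0.01

# ...but a confident wrong-side-of-maybe call is heavily punished.
print(round(brier_score([0.9], [0]), 2))          # 0.81
```

This asymmetry is why a forecaster who cares about reputation may rationally shade toward 50% even when their evidence supports more confidence.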
I’m in agreement with the point that the aleatoric uncertainty was a lot lower than the epistemic uncertainty, but we know how prediction error from polls arises: a significant subset of the population, often systematically skewed in favour of one candidate, refuses to answer them; other people make late decisions to vote or not vote; and the pattern is different every time. There doesn’t seem to be an obvious way to resolve the epistemic uncertainty around that with more money or less risk aversion, still less to reach 90% certainty when the polls were within the margin of error (and turned out to be reasonably accurate).
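For reference, the margin of error pollsters report covers only sampling error. A quick sketch of the standard textbook formula (the numbers are illustrative) shows why more money buys surprisingly little here, and why it buys nothing at all against the systematic non-response bias just described:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% sampling margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical n=1000 poll: about +/-3.1 points, from sampling alone.
print(round(100 * margin_of_error(1000), 1))   # 3.1

# Quadrupling the sample only halves the sampling error, and
# systematic non-response bias is not reduced at all by larger n.
print(round(100 * margin_of_error(4000), 1))   # 1.5
```

With key states polling inside that band, sampling math alone cannot carry anyone to 90% confidence; only knowledge of the systematic errors could.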
Market participants don’t have to worry about blowback from being wrong as much as Silver does, but they also didn’t think Trump was massively underpriced [before Theo, who we know had a theory but less relevant knowledge, stepped in, and arguably even afterwards if you think 90% certainty was feasible]. And for all that pollsters get accused of herding, some pollsters weren’t afraid to publish outlier polls; it’s just that the outliers went in both directions (with Selzer, one of the few with enough of a track record that her past outlier successes can’t be instantly written off as pure survivorship bias, publishing the worst of the lot late in the day). So I think the suggestion that there was some way to get to 65-90% certainty which apparently nobody was willing to either make with substantial evidence or cash in on to any significant extent is a pretty extraordinary claim...
So I think the suggestion that there was some way to get to 65-90% certainty which apparently nobody was willing to either make with substantial evidence or cash in on to any significant extent is a pretty extraordinary claim...
I’m skeptical that nobody was rationally (i.e. not overconfidently) at >65% belief that Trump would win before election day. Presumably a lot of the people holding Yes and buying Yes when Polymarket was at ~60% Trump believed Trump was >65% likely to win, right? And presumably a lot of them cashed in for a lot of money. What makes you think nobody was at >65% without being overconfident?
I’ll grant it seems very plausible that the French whale was overconfident, though I don’t know that for sure; but that doesn’t mean everyone at >65% was overconfident.
I’ll also note that just because the market was at ~60% (or wherever precisely) does not mean that there could not have been people participating in the market who were significantly more confident that Trump would win, and rationally so.
Sure, just because the market was at 60% doesn’t mean that nobody participating in it had 90% confidence, though in a thin market that indicates they were either cash-constrained or missing out on easy, low-risk, short-term profit. My bigger question is why no psephologists, the people one would think most likely to have a knowledge advantage as well as good prediction skill, who don’t have to risk savings and whose non-money returns are actually skewed in their favour (everyone remembers a correct outlying call; few people remember the wrong ones), seemed able to come up with an explanation of why the sources of massive polling uncertainty were actually not sources of massive uncertainty.
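For context on “missing out on easy, low-risk short term profit”: the arithmetic of a binary prediction market makes the forgone edge explicit (a toy calculation, ignoring Polymarket’s actual fees and slippage):

```python
def expected_profit_per_share(credence, price):
    """Expected value of buying one Yes share that pays $1 if the event occurs."""
    return credence * 1.0 - price

# A trader with a rational 90% credence facing a 60-cent price expects
# about 30 cents of profit per share, an enormous edge to leave unclaimed.
print(round(expected_profit_per_share(0.90, 0.60), 2))  # 0.3

# At a 65% credence the edge is real but far smaller, so more modest
# buying pressure is consistent with the observed ~60% price.
print(round(expected_profit_per_share(0.65, 0.60), 2))  # 0.05
```

So a persistent 60% price is weak evidence against the existence of rational 65% believers, but stronger evidence against well-capitalized rational 90% believers.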
And the focus of my argument was that in order to rationally have 65-90% confidence in an outcome when the polls in all the key states were within the margin of error and largely dependent on turnout and on how “undecideds” would vote, people would have to have some relevant knowledge of systematic error in polling, turnout, or “undecided” behaviour which either eliminated those sources of uncertainty or justified a belief that everyone else’s polls were skewed[1]. I don’t see any particular reason to believe the means to obtain that knowledge existed and was used when you can’t tell me what that might look like, never mind how a small number of apparently resource-poor people obtained it…
The fact that most polls correctly pointed to a Trump electoral college victory but also had a sufficiently wide margin of error to call a couple of individual states (and the popular vote) wrongly is consistent with neither “overcautious pollsters are exaggerating the margin of error” nor “pollsters don’t want Trump to look like he’ll win” being a well-justified reason to doubt their validity.
I don’t see any particular reason to believe the means to obtain that knowledge existed and was used when you can’t tell me what that might look like, never mind how a small number of apparently resource-poor people obtained it…
I wasn’t a particularly informed forecaster, so my not telling you what information would have been sufficient to justify a rational 65+% confidence in Trump winning shouldn’t be much evidence to you about whether a very informed person could practically reach a 65+% credence rationally. Identifying what information would have been sufficient is a very time-intensive, costly project, and given that I hadn’t already done it, I wasn’t about to spend months researching the data people in principle had access to that might have led to a >65% forecast just to answer your question.
Prior to the election, I had an inside view credence of 65% that Trump would win, but considered myself relatively uninformed and so I meta-updated on election models and betting market prices to be more uncertain, making my all-things-considered view closer to 50/50. As I wrote on November 4th:
My 2/10 low information inside view judgment is that Trump is about 65% likely to win PA and the election. My all-things-considered view is basically 50%.
However, notably, after about 10 hours of thinking about who will win in the last week, I don’t know if I actually trust Nate and prediction markets to be doing a good job. I suspect that there may be well-informed people in the world who *know* that the markets are wrong and have justified “true” beliefs that one candidate is >65% likely to win. Such people presumably have a lot of money on the line, but not enough to more [sic] the market prices far from 50%.
So I held this suspicion before the election, and I hold it still. I think it’s likely that such forecasters with rational credences of 65+% Trump victory did exist, and even if they didn’t, I think it’s possible that they could have existed if more people cared more about finding out the truth of who would win.