Let’s say I offered you a bet that we’ll have a commercially viable nuclear fusion plant operating by 2030, and said you could take the bet in favour at 100-to-1 odds, or against, also at 100-to-1 odds.
(So in the first case you ~100x your money if it happens, in the second you ~100x your money if it doesn’t.)
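The arithmetic behind that setup can be made explicit. A minimal sketch (function names and the one-unit stake are illustrative, not from the discussion): at symmetric 100-to-1 odds, the two bets have equal expected value only when your credence is exactly 0.5, so any lean at all toward ‘yes’ or ‘no’ reveals a non-agnostic probability estimate.

```python
def ev_yes(p, odds=100):
    """Expected value of a 1-unit 'yes' bet at odds-to-1:
    win `odds` units if the event happens, lose the stake if not."""
    return odds * p - (1 - p)

def ev_no(p, odds=100):
    """Expected value of the mirror-image 'no' bet at the same odds."""
    return odds * (1 - p) - p

# Indifference requires ev_yes(p) == ev_no(p), which solves to p = 0.5:
assert abs(ev_yes(0.5) - ev_no(0.5)) < 1e-9

# Any credence away from 0.5 breaks the symmetry; at p = 0.01
# the 'no' bet is overwhelmingly better:
print(ev_yes(0.01), ev_no(0.01))
```

So refusing to be neutral between the two bets amounts to holding a credence different from 50%, which is the point the question is probing.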
Would you be neutral between taking the ‘yes’ bet and the ‘no’ bet?
If not, I think it’s because you know we can form roughly informed views and expectations about how likely various advances are, using all kinds of different methods, and need not be completely agnostic.
If you would be indifferent, I think your view is untenable and I would like to make a lot of bets about future technological/scientific progress with you.
I’d take the bet, but the feeling that inclines me toward the affirmative says nothing about the actual state of the science and engineering. Even if I spend many hours researching the current state of the field, that will only change the feeling in my mind. I can assign that feeling a probability, tell others the feeling is “roughly informed,” and enroll in Phil Tetlock’s forecasting challenge. But none of this teaches me anything about the currently unknown discoveries that must be made in order to bring about commercial fusion.
Imagine asking Andrew Wiles, the morning of his discovery, whether he wanted to bet that a solution would be found that afternoon. Given his despair, he might have taken 100-to-1 against. And this subjective sense of things would have been well-informed: he could have talked to us for hours about why his approach doesn’t work. We’d have come away convinced that it was hopeless. But that feeling of hopelessness, of unlikelihood, of despair has nothing to do with the math.
Estimating what remains to be discovered for a breakthrough is like trying to measure a gap but not knowing where to place the other end of the ruler.
It’s hard to follow your argument. How is any of this different from “someone thought X was very unlikely, but then X happened, so estimating the likelihood of future events is fundamentally impossible and pointless”?
That line of reasoning clearly doesn’t work.
Things we assign low probability to in highly uncertain areas happen all the time — but that is exactly what we should expect and is consistent with our credences in many areas being informative and useful.
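That expectation can be checked with a minimal simulation (the event count and the 5% figure are illustrative, not from the discussion): a forecaster who assigns 5% to each of 200 independent long-shot events should see roughly 10 of them occur. Low-probability events happening is what calibration predicts, not evidence against it.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

p = 0.05   # credence assigned to each long-shot event
n = 200    # number of independent events forecast

# Count how many of the 5%-probability events actually occur.
hits = sum(random.random() < p for _ in range(n))

# Expected count is n * p = 10; the realised count lands near it.
print(hits)
```

A handful of “unlikely” events occurring is therefore consistent with the probabilities having been informative all along.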
It’s not that “it happened this one time with Wiles, who really knew the topic and was still way off in his estimate, so that’s how it goes.” The Wiles example shows that we are always in his shoes when contemplating the yet-to-be-discovered: we are completely in the dark. It’s not that he didn’t know; it’s that he COULDN’T know, and neither could anyone else who hadn’t made the discovery.