Physician—general internal medicine. Interested in progress, existential risk and epistemology. Did some earning to give, but now have my doubts. Heavier on ideas than execution.
astupple
Basically, predictions about the future are fine as long as they include the caveat “unless we figure out something else.” That caveat can’t be ascribed a meaningful probability because we can’t know discoveries before we discover them; we can’t know things before we know them.
Beautiful! We can’t compute the probability of “something we haven’t thought of” as simply 1 minus the probability of all the things we have thought of.
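To make that concrete, here is a minimal sketch (Python, with made-up numbers) of why the leftover “catch-all” term isn’t a real probability: it only measures how much confidence we happened to assign to the hypotheses we’ve already enumerated, not the space of ideas no one has had yet.

```python
# Hypothetical example: probabilities assigned to the explanations we *have* thought of.
# The scenarios and numbers are made up purely for illustration.
known_hypotheses = {
    "scenario_a": 0.50,
    "scenario_b": 0.25,
    "scenario_c": 0.10,
}

# The tempting move: define "something we haven't thought of" as the leftover mass.
catch_all = 1.0 - sum(known_hypotheses.values())
print(f"'Unknown unknowns' bucket: {catch_all:.2f}")

# But this number only reflects how confident we felt about the listed scenarios.
# Enumerate one more hypothesis (or re-weight the old ones) and the "probability"
# of the not-yet-conceived shrinks or grows arbitrarily -- it measures our
# bookkeeping, not the undiscovered ideas themselves.
known_hypotheses["scenario_d"] = 0.10
catch_all = 1.0 - sum(known_hypotheses.values())
print(f"After adding one more guess: {catch_all:.2f}")
```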
I mistakenly included my response to another comment; I’m pasting it below.
I would guess the scientific breakthrough that led to nuclear weapons would have been almost impossible to predict unless you were Einstein or Einstein-adjacent.
Great point—Leo Szilard foresaw nuclear weapons and collaborated with Einstein to persuade FDR to start the Manhattan Project. Szilard would have done extremely well in a Tetlock scenario. However, this also conforms with my point—Szilard was able to successfully predict because he was privy to the relevant discoveries. The remainder of the task was largely engineering (again, not to belittle those discoveries). I think this also applies to superforecasters—they become like Szilard, learning of the relevant discoveries and then foreseeing the engineering steps.
Regarding sci-fi, Szilard appears to have been influenced by H.G. Wells’s The World Set Free in 1913. But Wells was not just a writer—he was familiar with the state of atomic physics and therefore many of the relevant discoveries—he even dedicated the book to an atomic scientist. And Wells’s “atomic bombs” were lumps of a radioactive substance that released energy from a chain reaction, not a huge stretch from what was already known at the time. It’s pretty incredible that Szilard is later credited with foreseeing nuclear chain reactions in 1933, shortly after the discovery of the neutron, and he was likely influenced by Wells. So Wells is a great thinker, and this nicely illustrates how knowledge grows: by excellent guesses refined by criticism/experiment. But I don’t think we are seeing knowledge of discoveries before they are discovered.
Szilard’s prediction in 1939 is a lot different from a similar prediction in 1839. Any statement about such weapons in 1839 would be like Thomas Malthus’s predictions: made in a state of utter ignorance and unknowability about the eventual discoveries relevant to the forecast (nitrogen fixation and genetic modification of crops).
And this is also the case for discoveries in the long-term future.
Objections to my post read to me like “but people have forecasted things shortly before they have appeared.” True, but those forecasts already have much of the relevant discovery factored in, though largely invisible to non-experts.
Szilard must have seemed like a prophet to someone unfamiliar with the state of nuclear physics. You could understand a Tetlock who finds these seeming prophets among us and declares that some amount of prophecy is indeed possible. But to Wells, Szilard was just making a reasonable step from Wells’s idea, which was a reasonable step from earlier discoveries.
As for science fiction writers in general, that’s interesting. Obviously, selection effects will be strong (stories that turn out true will become famous), and good science fiction writers are more familiar with the state of the science than others. And finally, it’s one thing to make a great guess about the future. It’s entirely different to quantify the likelihood of this guess—I doubt even Jules Verne would try to put a number on the likelihood that submarines would eventually be developed.
I have to look at Tetlock again—there’s a difference between predicting what will be determined to be the cause of Arafat’s death (historical, fact collecting) and predicting how new discoveries in the future will affect future politics. Nonetheless, I wouldn’t be surprised if some people are better than others at predicting future events in human affairs. An example would be predicting that Moore’s Law holds next year. In such a case, one could understand the engineering that is necessary to improve computer chips, perhaps understanding that production of a necessary component will halve in price next year based on new supplies being uncovered in some mine. This is more knowledge of slight modifications of current understanding (basically, engineering vs. basic science research). It’s certainly important and impressive, but it’s more refining existing knowledge rather than making new discoveries. Though I do recognize this response reads like me moving the goal posts....
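As a toy illustration of the Moore’s Law example above, here is a minimal sketch in Python (the starting count and doubling period are assumptions for the example, not real industry figures): this kind of near-term forecast is just arithmetic on knowledge we already have.

```python
# Toy extrapolation of an established trend (Moore's-Law-style doubling).
# The starting count and doubling period are assumptions for illustration only.
transistors_now = 50e9        # transistors on a current flagship chip (assumed)
doubling_period_years = 2.0   # assumed doubling time

def extrapolate(count: float, years: float, doubling_period: float) -> float:
    """Project the trend forward -- refining existing knowledge, not discovering anything new."""
    return count * 2 ** (years / doubling_period)

print(f"Projected count next year: {extrapolate(transistors_now, 1.0, doubling_period_years):.2e}")
```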
Nice point about human development… I’m not sure how it relates. It seems to me this is biology playing out at a predictable pace. I’d bet that the elements of language development that are not dependent on biology vary greatly in their timelines, and the regularity that this research is discovering is almost purely biological. If we had the technology to do so, we could alter this biological development, and suddenly the old rules about milestones would fail. Put another way—reproducible experiments in psychology tell us about physiology of the brain, but nothing about minds, because mental phenomena are not predictable.
The periodic table is a perfect example of what I’m talking about—Mendeleev discovered the periodicity, and then was able to predict features of the natural world (that certain chemical properties would conform to this theory). So, periodicity was the discovery, and fitting in the elements just conformed to the original discovery.
Here’s another way to put my argument—imagine if every person were given a Honda Civic at age 16. You could imagine that most people would drive Honda Civics. An alien observer could think “humans are pre-programmed to choose Honda Civics.” But in fact, we are free to choose any car we want; it’s just really handy to keep driving the car we were given. Similarly in the real world—there are commonalities and propensities that can be picked up on by superforecasters, but that doesn’t mean they can’t be overwritten if someone has a mind to do so.
Great points though, I’ve got some thinking to do.
An update that came from the discussion:
Let’s split future events into two groups. 1) Events that are not influenced by people and 2) Events that are influenced by people.
In 1, we can create predictive models, use probability, even calculate uncertainty. All the standard rules apply, Bayesian and otherwise.
In 2, we can still create predictive models, but they’ll be nonsensical. That’s because we cannot know how knowledge creation will affect 2. We don’t even need any fancy reasoning; it’s already implied in the definition of terms like knowledge creation and discovery. You can’t discover something before you discover it, before it’s created.
So, up until recently, the bodies of the solar system fell into category 1. We can predict their positions many years hence, as long as people don’t get involved. However, once we are capable, there’s no way now to know what we’ll do with the planets and asteroids in the future. Maybe we’ll find use for some mineral found predominantly in some asteroids, or maybe we’ll use a planet to block heat from the sun as it expands, or maybe we’ll detect some other risk/benefit and make changes accordingly. In fact, this last type of change will predominate the farther we get into the future.
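Here is a minimal sketch of the contrast in Python. The circular-orbit simplification and the period used are assumptions for illustration: a category-1 system can be projected forward mechanically, but there is no analogous function to write for what people will decide to do with it once new knowledge arrives.

```python
import math

# Category 1: a body not (yet) influenced by people.
# Simplified circular orbit; the orbital period is assumed for illustration.
ORBITAL_PERIOD_DAYS = 687.0  # roughly Mars, used as a stand-in

def orbital_angle(days_from_now: float) -> float:
    """Predict the body's angular position (radians) arbitrarily far ahead."""
    return (2 * math.pi * days_from_now / ORBITAL_PERIOD_DAYS) % (2 * math.pi)

print(f"Angle in 10,000 days: {orbital_angle(10_000):.3f} rad")

# Category 2: the same body once people can act on it.
# There is no function we can write today whose inputs include discoveries
# that haven't been made yet -- this placeholder is the honest version of that model.
def human_use_of_body(days_from_now: float):
    raise NotImplementedError("depends on knowledge not yet created")
```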
This is an extreme example, but it applies across the board. Any time human knowledge creation impacts a system, there’s no way to model that impact before the knowledge is created.
Therefore, longtermism hinges on the idea that we have some idea of how to impact the long-term future. But even more than the solar system example, that future will be overwhelmingly dominated by new knowledge, and hence unknowable to us today, unable to be anticipated.
Sure, we can guess, and in the case of known future threats like nuclear war, we should guess and should try to ameliorate risk. But those problems apply to the very near future as well; they are problems facing us today (that’s why we know a fair bit about them). We shouldn’t waste effort trying to calculate the risk, because we can’t do that for items in group 2. Instead, we know from our best explanations that nuclear war is a risk.
In this way the threat of nuclear war is like the turkey’s—if the turkey even hears a rumor about Thanksgiving traditions, should it sit down and try to update its priors? Or should it take the entirely plausible theory seriously, try to test it (have other turkeys been slaughtered? are there any turkeys over a year old?), and decide if it’s worth it to take some precautions?
Thank you!
A Thanksgiving turkey has an excellent model that predicts the farmer wants him to be safe and happy. But an explanation of Thanksgiving traditions tells us a lot more about the risks of slaughter than the number of days the turkey has been fed and protected.
With nuclear war, we have explanations for why nuclear exchange is possible, including as an outcome of a conflict.
Just like with the turkey, we should pay attention to the explanation, not just try to make predictions based on past data.
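For what it’s worth, here is a minimal sketch of the turkey’s purely inductive forecast (Python, using Laplace’s rule of succession with an assumed feeding history): the past-data model grows more confident every day right up to Thanksgiving, which is exactly why the explanation matters more than the track record.

```python
# The turkey's inductive model: probability of being fed tomorrow,
# estimated from past data via Laplace's rule of succession.
def p_fed_tomorrow(days_fed_so_far: int) -> float:
    """(successes + 1) / (trials + 2): confidence grows with every uneventful day."""
    return (days_fed_so_far + 1) / (days_fed_so_far + 2)

for day in (10, 100, 330):  # assumed feeding history, for illustration
    print(f"Day {day}: P(fed tomorrow) = {p_fed_tomorrow(day):.3f}")

# The model says ~0.997 on the eve of Thanksgiving. The *explanation* of
# Thanksgiving traditions is what actually tells the turkey about the risk.
```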
With all of this, probability terminology is baked into the language, and it’s hard to speak without incorporating it. The previous post was co-authored; I wanted to remove that phrase, but concessions were made.
Making an estimate about something you’re unaware of is like guessing the likelihood of the discovery of nuclear energy in 1850.
I can put a number on the likelihood of discovering something totally novel, but applying a number doesn’t mean it’s meaningful. A psychic could make quantified guesses and tell us about the factors involved in that assessment, but that doesn’t make it meaningful.
I’m saying the opposite—you can’t rank the difficulty of unsolved problems if you don’t know what’s required to solve them. That’s what yet-to-be-discovered means: you don’t know the missing bit, so you can’t compare.
It’s not that “it happened this one time with Wiles, where he really knew a topic and was also way off in his estimate, and so that’s how it goes.” It’s that the Wiles example shows us we are always in his shoes when contemplating the yet-to-be-discovered: we are completely in the dark. It’s not that he didn’t know, it’s that he COULDN’T know, and neither could anyone else who hadn’t made the discovery.
But such work would undoubtedly produce unanticipated and destabilizing discoveries. You can’t grow knowledge in foreseeable ways, with only foreseeable consequences.
I’d take the bet, but the feeling I have that inclines me toward choosing the affirmative says nothing about the actual state of the science/engineering. Even if I spend many hours researching the current state of the field, this will only affect the feeling I have in my mind. I can assign that feeling a probability, tell others that the feeling I have is “roughly informed,” and enroll in Phil Tetlock’s forecasting challenge. But none of this teaches us anything about the currently unknown discoveries that need to be made in order to bring about cold fusion.
Imagine asking Andrew Wiles, the morning of his discovery, if he wanted to bet that a solution would be found that afternoon. Given his despair, he might take 100-to-1 against. And this subjective sense of things would indeed be well-formed; he could talk to us for hours about why his approach doesn’t work. And we’d come away convinced—it’s hopeless. But that feeling of hopelessness, unlikelihood, despair—they have nothing to do with the math.
Estimating what remains to be discovered for a breakthrough is like trying to measure a gap but not knowing where to place the other end of the ruler.
He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was “likely enough to be worth it.” And yet, at the moment when it should have been most evident what this likelihood was, he was so far off in his estimate that he almost quit.
Imagine a good EA stopping him in his moment of despair and encouraging him, with all the tools available, to create the most accurate estimate. I bet he’d still consider quitting. He might even be more convinced that it’s hopeless.
Refuting longtermism with Fermat’s Last Theorem
The danger of nuclear war is greater than it has ever been. Why donating to and supporting Back from the Brink is an effective response to this threat
Yes, but what I’m getting at is: how do we know there’s a limited number of low-hanging fruit? Or, as we make progress, don’t previously high fruit come into reach? AND, more progress opens more markets/fields.
It seems to me low-hanging fruit is a bad analogy because there’s no way to know the number of undiscovered fruit out there. And perhaps it’s infinite. Or, it INCREASES the more we figure out.
My two cents—stagnation isn’t due to the supply of good ideas waiting to be discovered; it’s due to the stifling of free and open exploration by our norms that promote the institutionalization of discovery.
How could it be that ideas are progressively harder to find AND we waited so long for the bicycle? How can we know how many undiscovered bicycles, i.e. low-hanging fruit, are out there?
It seems that as progress progresses and the adjacent possible expands, the number of undiscovered bicycles within easy reach expands too.
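A rough sketch of that intuition in Python (treating “fruit within easy reach” as pairwise combinations of existing ideas, which is of course a cartoon): each new discovery enlarges the set of reachable combinations rather than depleting a fixed stock.

```python
from math import comb

# Cartoon model of the "adjacent possible": count pairwise combinations
# of existing ideas as a stand-in for recombinations within easy reach.
for n_ideas in (10, 100, 1_000):
    print(f"{n_ideas} existing ideas -> {comb(n_ideas, 2)} possible pairings")

# 10 -> 45, 100 -> 4,950, 1,000 -> 499,500: the reachable frontier grows
# faster than the stock of ideas, rather than being drawn down like a
# fixed crop of low-hanging fruit.
```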
I love it. Creating lists of plausible outcomes is very valuable; we can leave aside the idea of assigning probabilities.