EDIT: This comment accumulated a lot of disagreement karma. If anyone would like to offer their reasons for disagreement, I might learn something. I wonder if the disagreements are with my choices of examples, or the substance of my prediction model, or something else.
Do you think a futurist’s job is to:
track trends and extrapolate them
paint a positive picture of the future
create a future that suits their interests
Longtermists devote some of their attention to a positive vision of the future, not a prediction of how things are likely to go. I question the vision’s likelihood and appeal.
A prediction like “environmental doomsday is coming and hyperconservation will pervade everything” assumes that rationality, ethics, and mutual accommodation will determine policy responses and public sentiment. The global scene of resource management looks quite different.
If the goal were to paint an accurate vision of the future, catastrophe would be a good choice, assuming that current approaches continue to decide the future.
If the goal were to offer a positive vision, hyperconservation could be part of it, because that vision includes reliance on positive values.
If the goal were to create a future to suit the futurist, the futurist would offer predictions that include their investments (for example, in fusion power or desalination or CCS) and distort the implications. At least, that’s the typical scenario. Futurists with money tied to their predictions have an interest in creating self-serving predictions.
When you write
Instead, poor predictors often pick a few predictions that were accurate or at least vaguely sounded similar to an accurate prediction and use those to sell their next generation of predictions to others.
I wonder about judging predictions by motive. If the prediction really is for sale, then I should get my money’s worth. If I were to buy a prediction from someone, I would throw out their subjective probability estimates and ask for their ontology and the information that they matched to it.
What you do with this sort of investigation is explore scenarios and gain useful knowledge. You develop predictive information as you revise your ontology, the web of beliefs about entities and their relationships that you match to the world to understand what is going on and what is going to happen. A domain expert or a good predictor offers ontology information, or real world information, that you can add to your own.
A good predictor has good research and critical-thinking skills. They use those skills to judge the plausibility of ontology elements and the credibility of sources. They gather information and become a bit of a domain expert. In addition, they develop means to judge which information is relevant, so that they can make a prediction from a relatively sparse ontology that they curate.
In my red-team submission, I said that decisions about relevance are based on beliefs. How you form your beliefs, constrain them, and add or remove them is how you curate your ontology. Given a small set of the right information, a good predictor has a specific ontology that lets them identify entities in the real world, match them to ontology relationships, and thereby predict an outcome implied by their ontology.
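To make this concrete, here is a minimal sketch, in Python, of what I mean by matching observations against a curated ontology. The names (Ontology, Relationship, predict) and the toy Greenland example are hypothetical illustrations of the idea, not an implementation anyone actually uses.

```python
# A minimal sketch of the ontology-matching model described above.
# All names and the toy climate example are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Relationship:
    """A belief: when the subject entity is observed, expect the outcome."""
    subject: str
    outcome: str


@dataclass
class Ontology:
    """A curated web of beliefs about entities and their relationships."""
    entities: set = field(default_factory=set)
    relationships: list = field(default_factory=list)

    def add_belief(self, subject: str, outcome: str) -> None:
        # Curation: adding a belief also registers the entities it mentions.
        self.entities.update({subject, outcome})
        self.relationships.append(Relationship(subject, outcome))

    def predict(self, observations: set):
        # Match real-world observations against known entities, then read off
        # the outcomes implied by the matching relationships.
        # No match means "the answer is unknown", not a made-up probability.
        matched = observations & self.entities
        if not matched:
            return None
        return [r.outcome for r in self.relationships if r.subject in matched]


# Toy example: a sparse, curated ontology aimed at one narrow question.
ontology = Ontology()
ontology.add_belief("persistent heat dome over Greenland", "accelerated melt season")
ontology.add_belief("accelerated melt season", "faster sea level rise")

print(ontology.predict({"persistent heat dome over Greenland"}))
# -> ['accelerated melt season']
print(ontology.predict({"a change the ontology knows nothing about"}))
# -> None, i.e. "the answer is unknown"
```

The point of the sketch is that the prediction falls out of the beliefs you chose to keep; when nothing matches, the honest output is “unknown” rather than a number.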
One implication of my model summarized here is that a predictor in a domain is only as good as:
their ontology for that domain
the information to which they have access
the question they answer with their prediction
If the ontology describes entities and relationships relevant to the question, then you can get an answer quickly. Some of your futurists might have fallen short in one or more of those areas, including in the questions that they attempted to answer.
Rephrasing a question can get a good answer from a reliable predictor even when the original question returned a prediction that did not match the eventual outcome.
It might be helpful to look at the following (a rough code sketch of these answer forms appears after the list):
qualifying questions
A less qualified question is something like, “If Greenland were to melt away someday, would that cause civilizational collapse?”, and given its generality, the answer is “No.” A more qualified question is something like, “If Greenland were to melt entirely within 15 years starting in 2032, would that cause civilizational collapse?”, and the answer, I believe, is “Yes.”
answering with certain alternatives
This sort of answer limits the possibilities. So a question like “Will acidification of the ocean reduce plankton populations?” has the answer “Acidification of the ocean will reduce plankton populations or alter their species composition.”
answering with plausible possibilities
This sort of answer offers new possibilities. So a question like “Could Greenland melt down earlier than anticipated?” has the answer “The meandering jet stream could park weather systems over Greenland that alternate heat waves with heavy rains [not snow] lasting for weeks on end,” among other possible answers.
answering with statistical data
This sort of answer is not the typical Bayesian answer, but I think you understand it well, probably better than I do. So a question like “Is medication X effective for treating disease Y?” has an answer based on clinical trial data: “Medication X removed all symptoms of disease Y in 85% of patients treated with X.”
answering with “unknown”
This sort of answer looks the same in every case, “The answer is unknown,” but it tells you something specific that a Bayesian probability would not. An answer based on Bayesian probabilities offers a blind guess and gives false confidence. “The answer is unknown.” tells you that you do not have any information with which to answer the question, or that you are asking the wrong question.
So a question like “Are there significant impacts of micro-plastics and pollutants on fish populations over the next 10 years?” answered with “The answer is unknown.” tells you that you need more information from somewhere. If you only have your own ontology, you can follow up with a different question, perhaps by asking a better-qualified question, or by asking for alternatives, for some plausible possibilities, or for some statistical data. For example, “Are there no impacts of micro-plastics and pollutants on fish populations?” has the answer “No,” and “What do micro-plastics and pollutants do to fish exposed to them?” offers more information that could lead to a question that gets you somewhere in making predictions.
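Here is the rough sketch promised above: the answer forms treated as distinct shapes of information rather than a single probability number. The class names and the restated example answers are hypothetical illustrations only.

```python
# A rough sketch of the five answer forms above as distinct data shapes,
# instead of collapsing everything into one subjective probability.
# All names are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class YesNo:            # answer to a well-qualified yes/no question
    answer: bool


@dataclass
class Alternatives:     # "X will happen, or Y will happen"
    options: list


@dataclass
class Possibilities:    # plausible mechanisms you had not considered
    scenarios: list


@dataclass
class Statistic:        # an answer grounded in measured data
    description: str
    value: float


@dataclass
class Unknown:          # no information to answer; ask a different question
    reason: str


# Restating the examples from the list above:
examples = {
    "If Greenland melts entirely within 15 years starting in 2032, will civilization collapse?":
        YesNo(True),
    "Will acidification of the ocean reduce plankton populations?":
        Alternatives(["reduced populations", "altered species composition"]),
    "Could Greenland melt down earlier than anticipated?":
        Possibilities(["jet stream parks alternating heat waves and heavy rain over Greenland"]),
    "Is medication X effective for treating disease Y?":
        Statistic("patients with all symptoms removed", 0.85),
    "Significant micro-plastics impact on fish populations within 10 years?":
        Unknown("no relevant information; ask a better-qualified question"),
}

for question, answer in examples.items():
    print(question, "->", answer)
```

The design choice the sketch is meant to show is that “unknown” is a first-class answer that prompts a follow-up question, not a gap to be papered over with a guessed probability.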
There are a few other things worth considering about futurists’ predictions when they are asked what will happen (for example, do any of them ever qualify their answer with “...or something else will happen”?).
Anyway, best of luck with your analysis of these futurists and their failures, Linch.
To be clear, this is not written by me but by Dan Luu. Sorry if my post was unclear!
OK, I got it, no problem.