Can you give examples of how these techniques helped you be better at forecasting?
Hmm, what has helped me be a better forecaster is developing a small, useful ontology in areas that interest me, and then researching to validate specifics of it that lead quickly to useful predictions. I match ontological relations to research information to develop useful expectations about my current pathway.
As far as predictions relevant to EA’s bigger-picture doomy stuff, I have done a bit of forecasting (on climate at gjopen; you’ll have to browse my history there) and a bit of backcasting (with results that might not fit the forum’s content guidelines), but without impact from doing so, as far as I know.
Linch, if you look at what I’m describing, it’s not far from combining forward chaining toward scenarios with backward chaining from scenarios, and people do that all the time. As far as how it helps my forecasting abilities, it doesn’t. It helps my planning abilities.
That said, I’m drawn to convenient mathematical methods to simplify complex estimates and want to learn more about them, and I am not trying to subvert forecasters or their methods. However, overall, the use of probability in forecasting mostly upsets me. I see the failure that it has meant for climate change policy, for example, and I get upset[1]. But I think it’s cool for simpler stuff with better evidence that doesn’t involve existential or similar risks.
The rest of this is not specifically written for you, Linch, though it is a general response to your statements. I suspect that your words reflect a consensus view and common misinterpretation of what I wrote. Some EA community members are focused on subjective probabilities and forecasting of novel one-off extreme negative events. They take it seriously. I get that. If my ideas matter so little, I’m confident that they will ignore them.
EAs can use their forecast models to develop plans
To me, a prediction is meant to convey confidence in a particular outcome, that is, confidence that one event from a list of several will occur, given the current situation. Considering all possible future events as uncertain (unknown in their relative probability) is useful, as I wrote, for creating a broad uncertainty about the future. To do so is not forecasting, obviously. However, for EAs, what I suggest does have application to the whole point of your work in forecasting.
You folks work on serious forecasts of novel events (like an existential threat to humanity, what you sometimes call a “tail risk”) because you want to avoid them. Without experience of such situations in the past, you can only rely on models (ontologies) to guide your forecasts. Those models indirectly contain useful information for planning.[2]
Remember to dislike ambiguous pathways into the future
The problem I think any ethical group (such as the EA community) working to save humanity faces is that it can seem committed, in some respects, to ambiguous pathways that allow for both positive and negative futures.
You want vibrant economies, alleviation of poverty, and long life for all, but you fear global resource losses and environmental destruction.
You want AGI to save us all but also fear that AGI will abuse or kill us all.
You want modern civilization and development across the natural world and increasing technology for treating disease, but you fear natural and man-made pandemics.
… and so on.
As is, our pathways into the future are ambiguous. Or are they? If they are not, then belief in some future possibilities drops to (effectively) zero or rises to one hundred percent. That’s how you know that your pathway is unambiguously going toward some specific future. Apparently, for EA folks, our pathways remain ambiguous with respect to many negative futures. You have special names for them: extinction risks, existential risks, suffering risks, and so on.
Use ontologies that inform your forecasts, but use them to create plans
My post was getting at how to use the ontologies that inform your forecasts, in as simple terms as possible. Here is another formulation (with a rough code sketch after the list):
Create a list of alternative, mutually exclusive events (you already have them in your models that you use to forecast events like existential risk from AGI)
Don’t bother with betting odds (you already have those in your models, so you ditch the subjective probability numbers temporarily).
Examine your list of events (you rejected some as too unlikely, others as more or less likely)
Put yourself in a mindset of broad uncertainty about the future (get back the mindset you tried to lose with your forecasting effort).
To do that, you might have to change your understanding of what you believe about the current situation and the likelihood of alternative futures (for example, by arbitrarily reversing those likelihoods and backtracking).[3]
Backtrack from futures you consider unlikely to their implications for the present (you EA folks reject plausible futures on the regular, but you need to understand the present and its path to the future differently in order to see yourself on a pathway to such rejected futures)
Examine the pathways to the events on your revised list (by this step you’re supposed to have figured out how a strange present situation leads you to a desirable or undesirable future)
Plan to exit negative pathways and enter positive pathways early (you can now see how far along you are, avoid negative futures, and embrace positive ones; disambiguate the pathways involved; look for exclusively positive pathways)
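If it helps to see that exercise laid out mechanically, here is a minimal sketch in Python. The event names, valence and likelihood labels, and the “what must be true today” placeholders are all hypothetical, made up only for illustration; nothing in it is a forecast, and the point is precisely that no probability numbers appear anywhere.

```python
# A minimal sketch of the list-and-backtrack exercise above, with
# made-up placeholder events. No probability numbers: only rough
# likelihood labels that get arbitrarily reversed.

events = {
    # mutually exclusive alternatives you already compare when forecasting
    "transformative AI goes well":       {"valence": "positive", "judged": "likely"},
    "extinction-level AI accident":      {"valence": "negative", "judged": "unlikely"},
    "no transformative AI this century": {"valence": "neutral",  "judged": "unlikely"},
}

def reverse(judgement: str) -> str:
    """Arbitrarily reverse a likelihood label to recover broad uncertainty."""
    return {"likely": "unlikely", "unlikely": "likely"}.get(judgement, judgement)

# Backtrack: for each future now treated as likely, record what the present
# situation would have to look like for that future to already be underway.
backtracked = {
    name: f"TODO: describe the present if '{name}' were already underway"
    for name, info in events.items()
    if reverse(info["judged"]) == "likely"
}

# Planning, not forecasting: exit pathways toward negative events early,
# enter pathways toward positive events early.
plan = {
    name: ("look for early exits" if events[name]["valence"] == "negative"
           else "look for early entry points")
    for name in backtracked
}

for name, implication in backtracked.items():
    print(f"{name}\n  present: {implication}\n  plan: {plan[name]}")
```

The only output is a list of present-day implications and an exit-or-entry intention per pathway, which is all the planning step needs.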
An EA shortcut to broad uncertainty about the future
A lot of outcomes are not considered “real” for people until they have some sort of pivotal experience that changes their perspective. By flip-flopping the probabilities that you expect (for example, pretending there is a 98% chance of human extinction from AGI rather than the small chance you actually assign), you give yourself permission to ask:
“What would that mean about the present, if that particular future were so likely?[4]”
Now, I think the way that some people would respond to that question is:
“Well, the disaster must have already happened and I just don’t know about it in this hypothetical present.”
If that’s all you’ve got, then it is helpful to keep a present frame of reference, and go looking for different ideas of the present and the likely future from others. Your own mental filtering and sorting just won’t let you answer the question well, even as a hypothetical, so find someone else who really thinks differently than you do.
“Broad uncertainty” does not imply that you think we are all doomed. However, since the chronic choice in EA is to think of unlikely doomish events as important, broad uncertainty about the future is a temporary solution. One way to get there is to temporarily increase your idea of the likelihood of those doomish events, and backtrack to what that must mean about the present[5]. By responding to broad uncertainty, you can make plans and carry out actions before you resolve any amount of the uncertainty.
Using precondition information to change the future, in detail
To collect and use precondition information about the various positive or negative events (a small sketch follows this list):
find the preconditions for the occurrence of various positive or negative events (for example, continued release of GHGs into the atmosphere)
study how to quickly exit the pathways that enable negative events (for example, reducing GHG emissions soon)
study how to quickly enter the pathways that enable positive events (for example, beginning oil/gas conservation efforts now)
Then, ideally, you, your group, your country, or your global society makes that pathway change instead of procrastinating on it and stressing a lot of people out.
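As a rough illustration of that bookkeeping, here is a small sketch that turns the GHG example into exit and entry actions. The event names, valences, and precondition strings are placeholders I chose for the example, not an analysis.

```python
# A small sketch of turning preconditions into pathway actions.
# Events, valences, and preconditions are illustrative placeholders only.

preconditions = {
    # event: (valence, preconditions that keep its pathway open)
    "severe climate damage":     ("negative", ["continued GHG releases into the atmosphere"]),
    "managed energy transition": ("positive", ["oil/gas conservation efforts begun now"]),
}

def pathway_actions(preconditions: dict) -> dict:
    """For each event, turn its preconditions into exit or entry actions."""
    actions = {}
    for event, (valence, conditions) in preconditions.items():
        if valence == "negative":
            # exit the pathway: remove or weaken the enabling preconditions soon
            actions[event] = [f"stop/reduce: {c}" for c in conditions]
        else:
            # enter the pathway: establish the enabling preconditions early
            actions[event] = [f"start/secure: {c}" for c in conditions]
    return actions

for event, acts in pathway_actions(preconditions).items():
    print(event, "->", acts)
```

Nothing in the sketch depends on how likely either event is; it only depends on which preconditions you are currently maintaining.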
Conclusion
So, what I presented earlier was just shortform-quality work on what could be called modeling, but it is not forecasting. It is much simpler.
It’s just a way to temporarily (only temporarily, because I know EAs love their subjective probabilities) withdraw from assigning probabilities to novel future events, events with no earlier similar events to inform you of their likelihood.
Start with just {A, not A} as your possible events
Build that out to {A, B, C, D, ...}
Note which events are negative or positive
Work on exiting or entering pathways toward events that you dislike or like, once you learn:
which pathways you are headed down
which pathways are ambiguous (a small sketch of this check follows the list)
what exit or entry points are present on various paths.
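To make the ambiguity check concrete, here is one more minimal sketch; the pathway names and reachable events are invented placeholders. A pathway counts as ambiguous here when both positive and negative futures remain reachable from it, which matches how I used the word above.

```python
# A short sketch of checking whether a pathway is ambiguous: does it keep
# both positive and negative futures reachable? All names are placeholders.

# pathway -> valences of the events still reachable while on that pathway
pathways = {
    "rapid AI scale-up":        {"A": "positive", "B": "negative"},  # ambiguous
    "strong biosecurity norms": {"C": "positive"},                   # unambiguous
}

def ambiguous(reachable: dict) -> bool:
    """A pathway is ambiguous if both valences remain reachable from it."""
    valences = set(reachable.values())
    return "positive" in valences and "negative" in valences

for name, reachable in pathways.items():
    status = "ambiguous" if ambiguous(reachable) else "unambiguous"
    print(f"{name}: {status} ({reachable})")
```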
It is important to return to your forecasts, take the list of compared alternative events in those forecasts, and learn more about how to avoid unfavorable alternatives, all while ignoring their likelihood. Those likelihood numbers could make you less effective, either by relieving you or by discouraging you when neither is appropriate.
[1] Policy-makers choose present and short-term interests repeatedly over longer-term concerns. In the case of climate change and environmental concerns in general, the last 50 years of policy have effectively doomed humanity to a difficult 21st century.
[2] Not the prediction probabilities; those should be thrown out, for the simple reason that, even if they are accurate, when you’re talking about the extinction of humanity, for example, the difference between a 1% and a 10% chance is not relevant to planning against the possibility.
[3] Like most people, you could wonder what the point is of pretending that the unlikely is very likely. Well, what is the likelihood, in your mind, of having poor-quality information about the present situation?
[4] Rather than you having to have such an experience yourself (for example, in the case of climate change, by helplessly watching a beloved family member perish in a heat wave), you just temporarily pretend that future extreme climatic events are actually very likely.
[5] When you backtrack, you are going through your pathway in reverse, from the future to the present. You study how you got from where you are right now to where you ended up. You can collect this information from others. If they predict doom, how did it happen? From that, you learn about what steps not to take going forward.