That was a really interesting paper!
Has there been any follow up work by you or others to refine your risk estimates, in particular to estimate the change to hazard rate?
So for example, you consider covering Yellowstone with 25 cm of unconsolidated material as a way to delay the next eruption and give us time to develop technology for a more permanent solution over the next, say, 50 or 100 years. You estimate that this intervention increases the expected value (EV) of the time to the next eruption by 100 years. That’s great, but I think what we really care about is something more like the hazard rate over the near term: what is the probability of preventing an eruption over the next 50 or 100 years? If the rate at which pressure in the magma chamber increases is roughly constant, this distinction doesn’t really matter, and a 100-year increase in EV means an eruption in the next 50 years is much less likely. But if the process is far from uniform, the 100-year increase in EV might not be as great as it sounds. Say, for example, the process is driven by large jumps in pressure occurring roughly once every 1000 years; then increasing the EV by 100 years only decreases the hazard rate by about 10%, and an eruption in the near term is still about 90% as likely after the intervention as before.
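To make the contrast concrete, here is a toy calculation (my own illustration, not from the paper; the distributions are stand-in assumptions). It compares a jump-driven process, modeled as Poisson-triggered eruptions with a mean waiting time of 1000 years, against a steady pressure build-up, modeled as an eruption time uniform over the next 200 years, when each receives the same +100-year shift in expected eruption time:

```python
import math

HORIZON = 50   # years of near-term risk we care about
EV_GAIN = 100  # intervention adds 100 years to the expected eruption time

# Case 1: jump-driven process, approximated as Poisson-triggered eruptions.
# Time to eruption is exponential, so EV = 1/rate; a +100 year EV gain
# moves the mean waiting time from 1000 to 1100 years.
p_jump_before = 1 - math.exp(-HORIZON / 1000)
p_jump_after = 1 - math.exp(-HORIZON / 1100)
print(f"jump-driven: P(eruption < {HORIZON}y) "
      f"{p_jump_before:.4f} -> {p_jump_after:.4f} "
      f"(ratio {p_jump_after / p_jump_before:.2f})")

# Case 2: steady pressure build-up with unknown current state, modeled as
# eruption time ~ Uniform(0, 200) years; the intervention shifts the whole
# distribution by +100 years (the same +100 EV gain).
p_steady_before = min(HORIZON, 200) / 200
p_steady_after = max(0, HORIZON - EV_GAIN) / 200
print(f"steady build-up: P(eruption < {HORIZON}y) "
      f"{p_steady_before:.2f} -> {p_steady_after:.2f}")
```

Under these (made-up) assumptions, the jump-driven case only shaves the 50-year eruption probability by about 9% in relative terms, while in the steady case the same +100-year EV gain pushes any near-term eruption entirely past the 50-year horizon.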
Another consideration: are the dynamics any different between intervening at a random time and intervening when there are signs an eruption may be imminent (but still enough time to complete the intervention)?
Thanks! Those are good questions. I have not put any more effort into it because resilient foods are likely lower cost to prepare for, and protect against multiple catastrophes, including super-volcanic eruptions. However, if we can get a few hundred million dollars for resilient foods, maybe working on preventing super-volcanic eruptions will be next on the list…
Your food resilience work is great: fascinating and really important! Indeed, I first heard of your supervolcano paper via your interview with Rob Wiblin which was primarily about feeding humanity after a catastrophe. In the grand scheme of things, that’s rightly higher priority, but the supervolcano stuff also caught my interest.
I happen to know a couple of volcanologists, so I asked them about your paper. They weren’t familiar with it, but both independently stressed that better monitoring of volcanoes and prediction of eruptions is quite tractable and would benefit from more resources.
The typical application of forecasting eruptions is evacuation. But that’s sociologically tricky when you inevitably have probabilities far from 1 and uncertain timelines, since an evacuation that ends up appearing unnecessary will lead to low compliance later (the volcanologists “cried wolf”). With interventions to prevent an eruption, that’s much less of an issue. Say you had a forecast that a certain supervolcano has a 20% probability of erupting in the next century, which is many orders of magnitude above the base rate. That’s still realistically pretty useless from the point of view of evacuation, but it would make your kind of interventions very attractive (if they work in that case).
So if it could be shown that these interventions are likely tractable even when a potential near-term eruption has been detected, that would justify increased investment both in detection/forecasting and in developing these approaches.