About the first three statements on existential risks, takeoff scenarios, and how influential our time is: How much does your view reflect the general wisdom of experts in the corresponding research fields (I’m not sure what this field would be for assessing our influence on the future), and how much is it something like your own internal view?
It depends on who we point to as the experts, and I think there could be disagreement about that. If we’re talking about, say, FHI folks, then I’m very clearly in the optimistic tail—others would put much higher probabilities on x-risk, on the takeoff scenarios, and on the chance of our being superinfluential. But note I think there’s a strong selection effect with respect to who becomes an FHI person, so I don’t simply peer-update to their views. I’d expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view. If I were wrong about that I’d change my view. One relevant piece of evidence is that the Metaculus algorithm (Metaculus is a community prediction site) puts the chance of 95%+ of people being dead by 2100 at 0.5%, which is in the same ballpark as me.
I think there’s some evidence that Metaculus users, while fairly smart and well-informed, are nowhere near as knowledgeable as a fairly informed EA (perhaps including a typical user of this forum?) on the specific questions around existential and global catastrophic risks.
One example I can point to: for this question on climate change and GCR before 2100 (which has been around since October 2018), a single not-very-informative comment from me was enough to move the community median from 24% to 10%. This suggests to me that Metaculus users did not previously have strong evidence or careful reasoning on this question, or perhaps on GCR-related questions in general.
Now you might think that actual superforecasters are better, but based on the comments released so far on COVID-19, I’m unimpressed. In particular, the selected comments point to the use of reference classes that EAs and avid Metaculus users had recognized as flawed for over a week before the report came out (e.g., treating China’s low death toll as evidence that other countries can easily replicate it in the default scenario).
Now, COVID-19 is not an existential risk or a GCR, but it is an “out of distribution” problem showing clear and fast exponential growth, which seems unlike most of the questions superforecasters are known to excel at.
Thanks for these interesting points!