5% probability by 2039 seems way too confident that it will take a long time: is this intended to be a calibrated estimate, or does the number have a different meaning?
Jaime gave a great, thorough explanation. My catch-phrase version: this is not a holistic Bayesian prediction. The confidence intervals come from bootstrapping (re-sampling) a fixed dataset, not from summing over all possible future trajectories for reality.
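To make that concrete, here is a minimal sketch of the bootstrap mechanism, with made-up numbers and a simple log-linear trend standing in for the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-in data (NOT the paper's dataset): year vs. some capability metric.
years = np.array([2016, 2017, 2018, 2019, 2020, 2021], dtype=float)
metric = np.array([5.0, 9.0, 16.0, 30.0, 55.0, 100.0])

def crossing_year(x, y, target=1e6):
    """Fit exponential growth (linear in log space) and return the year
    at which the fitted trend reaches `target`."""
    slope, intercept = np.polyfit(x, np.log(y), 1)
    return (np.log(target) - intercept) / slope

# Bootstrap: re-sample the *fixed* dataset with replacement, re-fit, repeat.
draws = []
for _ in range(2000):
    idx = rng.integers(0, len(years), size=len(years))
    if np.unique(years[idx]).size < 2:  # skip degenerate re-samples
        continue
    draws.append(crossing_year(years[idx], metric[idx]))

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% bootstrap CI for the crossing year: {lo:.0f}-{hi:.0f}")
```

The resulting interval quantifies sampling noise around one extrapolated trend; it says nothing about the probability that the trend itself breaks, which is exactly the gap between confidence and credence here.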
It is not intended to be a calibrated estimate, though we were hoping that it could help others make calibrated estimates.
The ways that a calibrated estimate would differ include:
1. The result is a confidence interval, not a credence interval (in most places where the paper says probability it should say confidence, I apologize for the oversight), so your choice of prior can make a big difference to the associated credence interval.
2. The model assumes that no discontinuous progress will happen, but we do not know whether this will hold. (Grace, 2020) estimates a yearly rate of discontinuous breakthroughs on any given technology of 0.1%, so I'd naively expect a 1 - (1 - 0.1%)^20 ≈ 2% chance of such a discontinuous breakthrough for quantum computing in the next 20 years (see the quick check after this list).
3. The model makes optimistic assumptions about progress: namely, that a) the rate of exponential progress will hold for both the physical qubit count and the gate error rate, b) there is no correlation between the metrics in a system (which we show is probably an optimistic assumption, since it is easier to optimize only one of the metrics than both), and c) we ignore the issue of qubit connectivity due to lack of data and modelling difficulty.
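A quick check of the arithmetic in point 2, using the 0.1% yearly rate as given:

```python
# Probability of at least one discontinuous breakthrough in 20 years,
# assuming an independent 0.1% chance per year (the rate from Grace, 2020).
p_yearly = 0.001
p_20_years = 1 - (1 - p_yearly) ** 20
print(f"{p_20_years:.1%}")  # ~2.0%
```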
If I were pressed to put a credence bound on it, I'd assign about 95% chance that EITHER the model is basically correct OR the timelines are slower than expected (most likely if the exponential trend of progress on gate error rate does not hold in the next 20 years), for an upper bound on the probability that we will have RSA 2048 quantum attacks by 2040 of < 5% + 95% × 5% ≈ 10%.
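Spelling that bound out (the decomposition is implied by the numbers above):

```python
# With probability ~5% the model is wrong in the direction of faster
# timelines (attack probability then bounded by 1); with probability ~95%
# the model holds or is slower (attack probability below 5%).
p_model_wrong_fast = 0.05
p_attack_if_model_ok = 0.05
upper_bound = p_model_wrong_fast * 1 + (1 - p_model_wrong_fast) * p_attack_if_model_ok
print(f"{upper_bound:.2%}")  # 9.75%, i.e. roughly 10%
```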
In either case, I think that the model should make us puzzle over the expert timelines, and inquire whether they are taking into account extra information or being too optimistic.
EDIT: I made an arithmetic mistake, now corrected (thanks to Eric Martin for pointing it out).
Do you have a link for (Grace, 2020)?
The citation is a link: (Grace, 2020)
Just in case: https://aiimpacts.org/discontinuous-progress-in-history-an-update/
It appears to be an extrapolation of exponential growth from current capacity, using maximum likelihood to fit the growth rate. Whether you believe the date comes down to how well you think their generalized logical qubit metric measures what they're trying to capture.
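For anyone curious what that looks like mechanically, here is a hedged sketch with invented numbers (not the paper's data or its exact estimator): under i.i.d. log-normal noise, least squares on log(metric) is the maximum-likelihood fit of the exponential growth rate, and the crossing date falls out of the fitted line.

```python
import numpy as np

# Hypothetical observations (NOT the paper's data): year vs. a combined
# "generalized logical qubit"-style metric, with a made-up target that a
# machine would need to reach before an RSA-2048 attack becomes feasible.
years = np.array([2015.0, 2017.0, 2019.0, 2021.0])
metric = np.array([2.0, 6.0, 20.0, 60.0])
TARGET = 1e4  # invented requirement

# log(metric) = intercept + rate * year + noise; least squares on the logs
# is the MLE of `rate` under i.i.d. Gaussian noise in log space.
rate, intercept = np.polyfit(years, np.log(metric), 1)

year_reached = (np.log(TARGET) - intercept) / rate
print(f"Fitted doubling time: {np.log(2) / rate:.2f} years")
print(f"Trend reaches the target around {year_reached:.0f}")
```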
I think it’s worth remembering that asking experts about timelines of more than 10 years often results in a guess of 10 years, so I would tend to favor a data-based extrapolation over that.