do we actually have better-than-order-of-magnitude knowledge about all of these parameters except Containment?)
Sorta kinda, yes? For example, convincingly arguing that any conditional probability in the Carlsmith decomposition is below 10% (without inflating the others) would probably win the main prize, given that “I [Nick Beckstead] am pretty sympathetic to the analysis of Joe Carlsmith here,” and that Nick’s own estimate was about 3× Carlsmith’s at the time the report was written.
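To make the arithmetic behind this concrete: Carlsmith’s decomposition multiplies a chain of conditional probabilities to get an overall risk estimate, so the joint probability can never exceed its smallest factor. A minimal sketch, using purely illustrative numbers (not Carlsmith’s actual estimates):

```python
# Hypothetical conditional probabilities in a Carlsmith-style decomposition.
# These specific values are illustrative assumptions, not figures from the report.
factors = [0.8, 0.4, 0.65, 0.4, 0.55, 0.95]

joint = 1.0
for p in factors:
    joint *= p

# The product of probabilities is bounded above by its smallest factor,
# so convincingly arguing any one conditional below 10% caps the total below 10%.
print(joint <= min(factors))  # always True
```

This is why arguing a single conditional below 10% (without the others rising to compensate) is sufficient to pull the headline number down.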
My understanding is that everyone involved (Carlsmith, Beckstead, etc.) is producing a point estimate: their most likely probability that some proposition is true. Shifting this point estimate to below 10% would be close enough to win a prize, but plenty of real-world applications have highish point estimates whose lower uncertainty bound is very low.
The application where I am most familiar with this effect is clinical trials for oncology drugs; it isn’t uncommon for the point estimate of a drug’s effectiveness to be, say, 50% better than everything else on the market, yet with a 95% confidence interval that includes no improvement at all, or sometimes a substantially worse outcome. It seems to me quite a radical claim that we have better knowledge of AI risk across nearly all parameters than we have of an oncology drug on a single parameter following a clinical trial.
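A minimal sketch of how a strong-looking point estimate can coexist with a confidence interval covering “no effect”, using made-up trial numbers (the hazard ratio and standard error below are illustrative assumptions, not real trial data):

```python
import math

# Hypothetical oncology-trial result: estimated hazard ratio 0.67
# (roughly a 50% improvement on the relative scale), with a wide
# assumed standard error of 0.25 on the log scale.
log_hr = math.log(0.67)
se = 0.25

# 95% confidence interval, back-transformed to the hazard-ratio scale.
lo = math.exp(log_hr - 1.96 * se)
hi = math.exp(log_hr + 1.96 * se)

# The interval crosses 1.0 ("no better at all"), even though the
# point estimate looks like a substantial improvement.
print(f"HR 0.67, 95% CI ({lo:.2f}, {hi:.2f})")
```

With these numbers the upper bound exceeds 1.0, i.e. the data are consistent with the drug being no better than the comparator, which is exactly the point-estimate-versus-uncertainty gap described above.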