Thanks for biting a bullet, I think I am making progress in understanding your view.
I also realized that part of my “feeling of inconsistency” comes from not having noticed that the table in section 3.2 reports the geometric mean of odds rather than the average, whereas the average would be lower.
Let's say we have a 2-parameter Carlsmith model, where we estimate probabilities P(A) and P(B|A) in order to get a final estimate of the probability P(A∩B). Let's say we have uncertainty over our probability estimates: we estimate P(A) using a random variable X, and estimate P(B|A) using a random variable Y. To make the math easier, I am going to assume that X and Y are discrete (I can repeat it for a more general case, e.g. using densities, if requested). We have k possible estimates $a_i$ for P(A), and $p_i := P(X = a_i)$ is the probability that X assigns the value $a_i$ as our estimate of P(A). Similarly, the $b_j$ are estimates for P(B|A) that Y outputs with probability $q_j := P(Y = b_j)$. We also have $\sum_{i=1}^{k} p_i = \sum_{j=1}^{k} q_j = 1$.
Your view seems to be something like “To estimate P(A∩B), we should sample from X and Y, and then compute the geometric mean of odds for our final estimate.”
Sampling from X and Y, we get the values $a_i \cdot b_j$ with probability $p_i \cdot q_j$, and taking the geometric mean of odds of these samples results in the formula
$$\frac{P(A \cap B)}{1 - P(A \cap B)} = \prod_{i=1}^{k} \prod_{j=1}^{k} \left( \frac{a_i b_j}{1 - a_i b_j} \right)^{p_i q_j}.$$
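To make this concrete, here is a quick numerical sketch of that geometric-mean-of-odds aggregation, with made-up numbers for the $a_i$, $b_j$, $p_i$, $q_j$ (k = 2):

```python
import math

# Toy discrete sketch with made-up numbers (k = 2):
# a[i] are estimates of P(A) with probabilities p[i];
# b[j] are estimates of P(B|A) with probabilities q[j].
a, p = [0.2, 0.6], [0.5, 0.5]
b, q = [0.1, 0.5], [0.5, 0.5]

# Weighted mean of the log-odds of the sampled products a_i * b_j
# (equivalent to the geometric mean of odds), converted back to a
# probability at the end.
log_odds = sum(
    p[i] * q[j] * math.log(a[i] * b[j] / (1 - a[i] * b[j]))
    for i in range(len(a))
    for j in range(len(b))
)
odds = math.exp(log_odds)
p_geo = odds / (1 + odds)
print(p_geo)  # roughly 0.08 with these numbers
```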
Whereas my view is “We should first collapse the distributions by taking the mean, and then multiply”: we first calculate $P(A) = \sum_{i=1}^{k} a_i p_i$ and $P(B|A) = \sum_{j=1}^{k} b_j q_j$, for a final formula of
$$P(A \cap B) = \left( \sum_{i=1}^{k} a_i p_i \right) \left( \sum_{j=1}^{k} b_j q_j \right).$$
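The same kind of sketch for the mean-then-multiply rule, again with made-up numbers (k = 2):

```python
# Toy discrete sketch with made-up numbers (k = 2):
# a[i] are estimates of P(A) with probabilities p[i];
# b[j] are estimates of P(B|A) with probabilities q[j].
a, p = [0.2, 0.6], [0.5, 0.5]
b, q = [0.1, 0.5], [0.5, 0.5]

# Collapse each distribution to its mean first, then multiply.
mean_a = sum(ai * pi for ai, pi in zip(a, p))  # E[X] = 0.4
mean_b = sum(bj * qj for bj, qj in zip(b, q))  # E[Y] = 0.3
p_mult = mean_a * mean_b
print(p_mult)  # 0.12 (up to float rounding)
```

With these numbers the mean-then-multiply estimate is 0.12, while aggregating the same sampled products by the geometric mean of odds gives roughly 0.08, illustrating how the two aggregation rules come apart.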
And you are also saying “P(A∩B) = P(A)⋅P(B|A) is still true, but the above naive estimates for P(A) and P(B|A) are not good, and should actually be different (and lower than typical survey respondents' estimates in the case of AI x-risk).” (I can't derive a precise formula from your comments or my skim of the article, but I don't think that's a crucial issue.)
Do I characterize your view roughly right? (Not saying that is your whole view, just parts of it).