Yes, but in doing so the uncertainty in both A and B matters, and showing that A has lower variance than B doesn’t show that E[benefits(A)] > E[benefits(B)]. Even if benefits(B) are highly uncertain and we know benefits(A) extremely precisely, it can still be the case that benefits(B) are larger in expectation.
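A toy example with made-up numbers (both causes and their payoffs here are purely hypothetical):

```python
import random

# Hypothetical cause A: a benefit we know almost exactly.
def benefit_A():
    return 10.0

# Hypothetical cause B: usually nothing, occasionally a large payoff.
# E[benefits(B)] = 0.9 * 0 + 0.1 * 200 = 20, despite the much higher variance.
def benefit_B():
    return 200.0 if random.random() < 0.10 else 0.0

draws = [benefit_B() for _ in range(100_000)]
print("E[benefits(A)] =", benefit_A())              # 10.0, essentially certain
print("E[benefits(B)] ~", sum(draws) / len(draws))  # ~20, highly uncertain but larger
```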
If you properly account for uncertainty, you should pick the certain cause over the uncertain one even if a naive EV calculation says otherwise, because you aren’t accounting for the selection process involved in picking the cause. I’m writing an explainer for this, but if I’m reading the optimiser’s curse paper right, a rule of thumb is that if cause A is 10 times more certain than cause B, cause B should be downweighted by a factor of 100 when comparing them.
I will caveat this by saying that in my opinion it makes sense for estimation purposes to discount or shrink estimates of highly uncertain quantities, which I think many advocates of AI as a cause fail to do and can be fairly criticized for. But the issue is a quantitative one, and so can come out either way. I think there is a difference between saying that we should heavily shrink estimates related to AI due to their uncertainty and lower-quality evidence, vs. saying that they lack any evidence whatsoever.
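To make the shrinking idea concrete, here is one toy way to see where a factor like the 100 quoted above could come from. This is my own sketch, assuming a zero-mean normal prior on the true benefit and independent normal noise on the estimate, which may not be the exact model in the optimiser’s curse paper:

```latex
% Toy normal-normal model: true benefit \theta, noisy estimate \hat{\theta}.
\theta \sim \mathcal{N}(0, \tau^2), \qquad
\hat{\theta} \mid \theta \sim \mathcal{N}(\theta, \sigma^2)

% The posterior mean shrinks the raw estimate toward the prior mean of 0:
\mathbb{E}[\theta \mid \hat{\theta}]
  = \frac{\tau^2}{\tau^2 + \sigma^2}\,\hat{\theta}
  \;\approx\; \frac{\tau^2}{\sigma^2}\,\hat{\theta}
  \quad \text{when } \sigma \gg \tau

% With \sigma_B = 10\,\sigma_A (cause B's estimate is 10x noisier),
% B's shrinkage factor is ~100x smaller, since it scales with 1/\sigma^2.
```

So under this toy model the factor-of-100 downweighting corresponds to inverse-variance shrinkage in the regime where the estimation noise swamps the prior; a different prior or noise model would give a different factor.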
I feel like my position is consistent with what you have said; I just view this as part of the estimation process. When I say “E[benefits(A)] > E[benefits(B)]”, I am assuming these are your best all-inclusive estimates, including regularization/discounting/shrinking of highly variable quantities. In fact, I think it’s also fine to use things other than expected value, or in general to use approaches that are more robust to outliers/high-variance causes. As I say in the above quote, I also think it is a completely reasonable criticism of AI risk advocates that they fairly often fail to do this.
If you properly account for uncertainty, you should pick the certain cause over the uncertain one even if a naive EV calculation says otherwise, because you aren’t accounting for the selection process involved in picking the cause. I’m writing an explainer for this, but if I’m reading the optimiser’s curse paper right, a rule of thumb is that if cause A is 10 times more certain than cause B, cause B should be downweighted by a factor of 100 when comparing them.
In one of my comments above, I say this:
I feel like my position is consistent with what you have said; I just view this as part of the estimation process. When I say “E[benefits(A)] > E[benefits(B)]”, I am assuming these are your best all-inclusive estimates, including regularization/discounting/shrinking of highly variable quantities. In fact, I think it’s also fine to use things other than expected value, or in general to use approaches that are more robust to outliers/high-variance causes. As I say in the above quote, I also think it is a completely reasonable criticism of AI risk advocates that they fairly often fail to do this.
This is sometimes correct, but the math could also come out such that the highly uncertain cause area is preferable even after adjustment. Do you agree with this? That’s really the only point I’m trying to make!
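To make that concrete, a quick example with made-up numbers (the raw estimates and shrinkage factors below are purely illustrative, not anyone’s actual estimates):

```python
# Raw estimates of benefits and the uncertainty discount applied to each
# (cause B's estimate is far noisier, so it gets shrunk ~100x harder).
raw_A, shrink_A = 10.0, 1.0       # well-evidenced cause, essentially no shrinkage
raw_B, shrink_B = 10_000.0, 0.01  # highly uncertain cause, heavy shrinkage

print(shrink_A * raw_A)  # 10.0
print(shrink_B * raw_B)  # 100.0 -> B still comes out ahead after the adjustment

# But if the raw estimate for B were 500 instead, its adjusted value would be
# 5.0 and the certain cause would win: the question is quantitative either way.
```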
I don’t think the difference here comes down to one side that is scientific and rigorous and loves truth versus another that is biased and shoddy and just wants to sneak their policies through in an underhanded manner with no consideration for evidence or science. Analyzing these things is messy, and different people interpret evidence in different ways or weigh different factors differently. To me this is normal and expected.
I’d be very interested to read your explainer, it sounds like it addresses a valid concern with arguments for AI risk that I also share.