If you believe that evidence that does not withstand scrutiny (that is, evidence that fails basic quality standards, contains major methodological errors, is statistically insignificant, rests on fallacious reasoning, or fails scrutiny for any other reason) is evidence that we should use, then you are advocating for pseudoscience. The expected value of benefits based on such evidence is near zero.
I don’t think evidence which is based on something other than “high-quality studies that withstand scrutiny” is pseudoscience. You could have moderate-quality studies that withstand scrutiny, or preliminary studies which are suggestive but which haven’t been around long enough for scrutiny to percolate up. I don’t think these things have near-zero evidential value.
This is my issue with your use of the term “scientific evidence” and related concepts. Its role in the argument is mostly rhetorical, having the effect of characterizing other arguments or positions as not worthy of consideration without engaging with the messy question of what value various pieces of evidence actually have. It causes confusion and results in you equivocating about what counts as “evidence”.
My view, and where we seem to disagree, is that I think there are types of evidence other than “high-quality studies that withstand scrutiny” and pseudoscience. Look, I agree that if something has basically zero evidential value we can reasonably round that off to zero. But “limited evidence” isn’t the same as near-zero evidence. I think there is a category of evidence between pseudoscience/near-zero evidence and “high-quality studies that withstand scrutiny”. When we don’t have access to the highest-quality evidence, it is acceptable in my view to make policy based on the best evidence that we have, including if it falls in that intermediate category. This is the same argument made in the quote from the report.
The quoted text implies that the evidence would not be sufficient under normal circumstances
This is exactly what I mean when I say this approach results in you equivocating. In your OP, you explicitly claim that this quote argues that evidence is not something that is needed. You clarify in your comments with me, and in a clarification at the top of your post, that only “high-quality studies that withstand scrutiny” really count as evidence as you use the term. The fact that you are using the word “evidence” in this way is causing you to misinterpret the quoted statement. The quote is saying that even if we don’t have the ideal, high-quality evidence that we would like, and that might be needed for us to be highly confident and establish a strong consensus, in situations of uncertainty it is acceptable to make policy based on more limited or moderate evidence. I share this view and think it is reasonable, not pseudoscientific or somehow a claim that evidence of some kind isn’t required.
If the amount of evidence was sufficient, there would be no question about what is the correct action.
Uncertainty exists! You can be in a situation where the correct decision isn’t clear because the available information isn’t ideal. This is extremely common in real-world decision making. The entire point of this quote and my own comments is that when these situations arise, the reasonable thing to do is to make the best possible decision with the information you have (which might involve trying to get more information) rather than declaring some policies off the table because they don’t have the highest-quality evidence supporting them. Making decisions under uncertainty sometimes means making decisions based on limited evidence.
In one of my comments above, I say this:
I feel like my position is consistent with what you have said; I just view this as part of the estimation process. When I say “E[benefits(A)] > E[benefits(B)]” I am assuming these are your best all-inclusive estimates, including regularization/discounting/shrinking of highly variable quantities. In fact, I think it’s also fine to use things other than expected value, or in general to use approaches that are more robust to outliers/high-variance causes. As I say in the above quote, I also think it is a completely reasonable criticism of AI risk advocates that they fail to do this reasonably often.
This is sometimes correct, but the math could come out that the highly uncertain cause area is preferable after adjustment. Do you agree with this? That’s really the only point I’m trying to make!
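To make that point concrete, here is a minimal toy sketch of the kind of adjustment being described. All the numbers, the skeptical prior of zero, and the linear shrink-toward-prior scheme are my own illustrative assumptions, not anything from the discussion itself:

```python
# Toy sketch: compare two cause areas after shrinking noisy raw estimates
# toward a skeptical prior. Every number here is an illustrative assumption.

def shrink(estimate, prior=0.0, weight=0.5):
    """Shrink a raw estimate toward a prior.

    `weight` is how much we trust the raw estimate
    (1.0 = take it at face value, 0.0 = ignore it entirely).
    """
    return weight * estimate + (1 - weight) * prior

# Cause A: well-evidenced, modest raw benefit estimate; trust it heavily.
ev_a = shrink(estimate=10.0, weight=0.9)     # -> 9.0

# Cause B: speculative, huge raw estimate; discount it steeply.
ev_b = shrink(estimate=1000.0, weight=0.05)  # -> 50.0

# Even after heavy discounting, the highly uncertain cause can still come
# out ahead of the well-evidenced one.
print(ev_a, ev_b, ev_b > ev_a)
```

Of course, with a steeper discount (say `weight=0.005`) the comparison flips the other way; the example only shows that the adjusted math *can* favor the uncertain cause, not that it must.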
I don’t think the difference here comes down to one side that is scientific and rigorous and loves truth against another that is biased and shoddy and just wants to sneak its policies through in an underhanded manner with no consideration for evidence or science. Analyzing these things is messy, and different people interpret evidence in different ways or weigh different factors differently. To me this is normal and expected.
I’d be very interested to read your explainer, it sounds like it addresses a valid concern with arguments for AI risk that I also share.