I think many people in the EA community in fact have this view. Do you think those people should still prefer GHD because AI is off limits due to not being “scientific”? I would consider this to be “for style points”, and disagree with this approach.
It seems you have an issue with the word “scientific” and are constructing a straw-man argument around it. This has nothing to do with “style points”. As I have already explained, by scientific I only refer to high-quality studies that withstand scrutiny. If a study doesn’t, then its value as evidence is heavily discounted, because the probability of its conclusions being right despite methodological errors, failures to replicate, etc. is lower than if the study did not have these issues. If a study hasn’t been scrutinized at all, it is likely bad, because the amount of bad research is greater than the amount of good research (consider, for example, the rejection rates of journals/conferences), and lack of scrutiny implies lack of credibility, as researchers do not take the study seriously enough to scrutinize it.
The conclusion that cause A is preferable to cause B involves the uncertainty about both causes. Even if cause A has more rigorous evidence than cause B, that doesn’t mean the conclusion that benefits(A) > benefits(B) is similarly rigorous.
Yet E[benefits(A)] > E[benefits(B)] is a rigorous conclusion, because the uncertainty can be factored into the expected value.
Can I ask why? Do you think AI won’t be a “big deal” in the reasonably near future?
The International AI Safety Report lists many realistic threats (the first one of those is deepfakes, to give an example). Studying and regulating these things is nice, but they are not effective interventions in terms of lives saved etc.
I’m really at a loss here. If your argument is taken literally, I can convince you to fund anything, since I can give you highly uncertain arguments for almost everything. I cannot believe this is really your stance. You must agree with me that uncertainty affects decision making. It seems only that the word “scientific” bothers you for some reason, which I cannot really understand either. Do you believe that methodological errors are not important? That statistical significance is not required? That replicability doesn’t matter? To object to the idea that these issues cause uncertainty is absurd.
It seems you have an issue with the word “scientific” and are constructing a straw-man argument around it.
The “scientific” phrasing frustrates me because I feel like it is often used to suggest high rigor without demonstrating that such rigor actually applies to a given situation, and because I feel like it is used to exclude certain categories of evidence when those categories are relevant, even if they are less strong compared to other kinds of evidence. I think we should weigh all relevant evidence, not exclude certain pieces because they aren’t scientific enough.
Yet E[benefits(A)] > E[benefits(B)] is a rigorous conclusion, because the uncertainty can be factored into the expected value.
Yes, but in doing so the uncertainty in both A and B matters, and showing that A is lower variance than B doesn’t show that E[benefits(A)] > E[benefits(B)]. Even if benefits(B) are highly uncertain and we know benefits(A) extremely precisely, it can still be the case that benefits(B) are larger in expectation.
I cannot believe this is really your stance. You must agree with me that uncertainty affects decision making.
In my comment that you are responding to, I say:
The conclusion that cause A is preferable to cause B involves the uncertainty about both causes.
I also say:
I will caveat this by saying that in my opinion it makes sense for estimation purposes to discount or shrink estimates of highly uncertain quantities
What about these statements makes you think that I don’t believe uncertainty affects decision making? It seems like I say that it does affect decision making in my comment.
If stock A very likely has a return in the range of 1-2%, and stock B very likely has a return in the range of 0-10%, do you think stock A must have a better expected return because it has lower uncertainty?
Yes, uncertainty matters, but it is more complicated than saying that the least uncertain option is always better. Sometimes the option that has less rigorous support is still better in an all-things-considered analysis.
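To make the stock example concrete, here is a minimal sketch (the uniform distributions are purely my illustrative assumption; the example above only gives “very likely” ranges):

```python
# Toy simulation: a high-variance option can still have the higher mean.
import random

random.seed(0)

n = 100_000
# Stock A: return very likely in a narrow 1-2% band.
mean_a = sum(random.uniform(0.01, 0.02) for _ in range(n)) / n
# Stock B: return highly uncertain, anywhere from 0% to 10%.
mean_b = sum(random.uniform(0.00, 0.10) for _ in range(n)) / n

print(f"E[A] ~ {mean_a:.3f}")  # ~0.015
print(f"E[B] ~ {mean_b:.3f}")  # ~0.050, despite the much wider range
```

Lower variance tells you nothing by itself about which expectation is higher.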
If your argument is taken literally, I can convince you to fund anything, since I can give you highly uncertain arguments for almost everything.
I don’t think my argument leads to this conclusion. I’m just saying that AI risk has some evidence behind it, even if it isn’t the most rigorous evidence! That’s why I’m being such a stickler about this! If it were true that AI risk has actually zero evidence then of course I wouldn’t buy it! But I don’t think there actually is zero evidence even if AI risk advocates sometimes overestimate the strength of the evidence.
Yes, but in doing so the uncertainty in both A and B matters, and showing that A is lower variance than B doesn’t show that E[benefits(A)] > E[benefits(B)]. Even if benefits(B) are highly uncertain and we know benefits(A) extremely precsiely, it can still be the case that benefits(B) are larger in expectation.
If you properly account for uncertainty, you should pick the certain cause over the uncertain one even if a naive EV calculation says otherwise, because you aren’t accounting for the selection process involved in picking the cause. I’m writing an explainer for this, but if I’m reading the optimiser’s curse paper right, a rule of thumb is that if cause A is 10 times more certain than cause B, cause B should be downweighted by a factor of 100 when comparing them.
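Roughly, the model behind that rule of thumb is normal-normal shrinkage; here is a minimal sketch (my reconstruction with illustrative numbers, not the paper’s own code):

```python
# Prior over true cost-effectiveness: Normal(0, tau^2).
# Noisy estimate of a cause: Normal(true value, sigma^2).
# Posterior mean = weight * estimate, with weight = tau^2 / (tau^2 + sigma^2).

def shrinkage_weight(sigma: float, tau: float = 1.0) -> float:
    """Weight the posterior mean puts on the noisy estimate."""
    return tau**2 / (tau**2 + sigma**2)

# Cause B's estimate has 10x the standard error of cause A's.
print(shrinkage_weight(sigma=3.0))   # ~0.100
print(shrinkage_weight(sigma=30.0))  # ~0.0011, roughly 90x smaller
```

Once sigma dominates tau, the weight falls off like tau^2/sigma^2, so 10x the uncertainty costs roughly a 100x discount.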
I will caveat this by saying that in my opinion it makes sense for estimation purposes to discount or shrink estimates of highly uncertain quantities, which I think many advocates of AI as a cause fail to do and can be fairly criticized for. But the issue is a quantitative one, and so can come out either way. I think there is a difference between saying that we should heavily shrink estimates related to AI due to their uncertainty and lower quality evidence, vs saying that they lack any evidence whatsoever.
I feel like my position is consistent with what you have said, I just view this as part of the estimation process. When I say “E[benefits(A)] > E[benefits(B)]” I am assuming these are your best all-inclusive estimates, including regularization/discounting/shrinking of highly variable quantities. In fact I think it’s also fine to use things other than expected value, or in general to use approaches that are more robust to outliers/high-variance causes. As I say in the above quote, I also think it is a completely reasonable criticism of AI risk advocates that they reasonably often fail to do this.
If you properly account for uncertainty, you should pick the certain cause over the uncertain one even if a naive EV calculation says otherwise
This is sometimes correct, but the math could come out that the highly uncertain cause area is preferable after adjustment. Do you agree with this? That’s really the only point I’m trying to make!
I don’t think the difference here comes down to one side which is scientific and rigorous and loves truth against another that is biased and shoddy and just wants to sneak its policies through in an underhanded manner with no consideration for evidence or science. Analyzing these things is messy, and different people interpret evidence in different ways or weigh different factors differently. To me this is normal and expected.
I’d be very interested to read your explainer, it sounds like it addresses a valid concern with arguments for AI risk that I also share.
The “scientific” phrasing frustrates me because I feel like it is often used to suggest high rigor without demonstrating that such rigor actually applies to a given situation, and because I feel like it is used to exclude certain categories of evidence when those categories are relevant, even if they are less strong compared to other kinds of evidence. I think we should weigh all relevant evidence, not exclude certain pieces because they aren’t scientific enough.
Again, you are attacking me because of the word “scientific” instead of attacking my arguments. As I have said many, many times, studies should be weighted based on their content and the scrutiny they receive. To oppose the word “science” just because of the word itself is silly. Your idea that works are arbitrarily sorted into “scientific” and “non-scientific” based on “style points” instead of an assessment of their merits is just wrong and a straw-man argument.
I don’t think my argument leads to this conclusion. I’m just saying that AI risk has some evidence behind it, even if it isn’t the most rigorous evidence! That’s why I’m being such a stickler about this! If it were true that AI risk has actually zero evidence then of course I wouldn’t buy it! But I don’t think there actually is zero evidence even if AI risk advocates sometimes overestimate the strength of the evidence.
Where have I ever claimed that there is no evidence worth considering? At the start of my post, I write:
What unites many of these statements is the thorough lack of any evidence.
There are some studies that are rigorously conducted that provide some meager evidence. Not really enough to justify any EA intervention. But instead of referring to these studies, people use stuff like narrative arguments and ad-hoc models, which have approximately zero evidential value. That is the point of my post.
What about these statements makes you think that I don’t believe uncertainty affects decision making? It seems like I say that it does affect decision making in my comment.
If you believe this, I don’t understand where you disagree with me, other than your weird opposition to the word “scientific”.
Where have I ever claimed that there is no evidence worth considering?
In your OP, you write:
In this post, I’ve criticized non-evidence-based arguments, which hangs on the idea that evidence is something that is inherently required. Yet it has become commonplace to claim the opposite. One example of this argument is presented in the International AI Safety Report
You then quote the following:
Given sometimes rapid and unexpected advancements, policymakers will often have to weigh potential benefits and risks of imminent AI advancements without having a large body of scientific evidence available. In doing so, they face a dilemma. On the one hand, pre-emptive risk mitigation measures based on limited evidence might turn out to be ineffective or unnecessary. On the other hand, waiting for stronger evidence of impending risk could leave society unprepared or even make mitigation impossible – for instance if sudden leaps in AI capabilities, and their associated risks, occur.
Your summary of the quoted text is inaccurate. You claim that this is an argument that evidence is not something that is inherently required, but the quote says no such thing. Instead, it references “a large body of scientific evidence” and “stronger evidence” vs “limited evidence”. This quote essentially makes the same argument I do above. How can we square the differences in these interpretations?
In response to me, you write:
In my post, I referred to the concept of “evidence-based policy making”. In this context, evidence refers specifically to rigorous, scientific evidence, as opposed to intuitions, unsubstantiated beliefs and anecdotes. Scientific evidence, as I said, referring to high-quality studies corroborated by other studies.
So, as used in your post, “evidence” means “rigorous, scientific evidence, as opposed to intuitions, unsubstantiated beliefs and anecdotes”. This is why I find your reference to “scientific evidence” frustrating. You draw a distinction between two categories of evidence and claim policy should be based on only one. I disagree: I think policy should be based on all available evidence, including intuition and anecdote (“unsubstantiated belief” obviously seems definitionally not evidence). I also think your argument relies heavily on contrasting with a hypothetical highly rigorous body of evidence that isn’t often achieved, which is why I have pointed out what I see as the “messiness” of lots of published scientific research.
The distinction you draw and how you defined “evidence” results in an equivocation. Your characterization of the quote above only makes sense if you are claiming that AI risk can only claim to be “evidence-based” if it is backed by “high-quality studies that withstand scrutiny”. In other words, as I said in one of my comments:
It seems like the core of your argument is saying that there is a high burden of proof that hasn’t been met.
So, where do we disagree? As I say immediately after:
I agree that arguments for short timelines haven’t met a high burden of proof but I don’t believe that there is such a burden.
I believe that we should compare E[benefits(AI)] with E[benefits(GHD)] and any other possible alternative cause areas, with no area having any specific burden of proof. The quality of the evidence plays out in taking those expectations. Different people may disagree on the results based on their interpretations of the evidence. People might weigh different sources of evidence differently. But there is no specific burden to have “high-quality studies that withstand scrutiny”, although this obviously weighs in favor of a cause that does have those studies. I don’t think having high quality studies amounts to “style points”. What I think would amount to “style points” is if someone concluded that E[benefits(AI)] > E[benefits(GHD)] but went with GHD anyway because they think AI is off limits due to the lack of “high-quality studies that withstand scrutiny” (i.e. if there is a burden of proof where “high-quality studies that withstand scrutiny” are required).
If you believe that evidence that does not withstand scrutiny (that is, evidence that does not meet basic quality standards, contains major methodological errors, is statistically insignificant, is based on fallacious reasoning, or fails scrutiny for any other reason) is evidence that we should use, then you are advocating for pseudoscience. The expected value of benefits based on such evidence is near zero.
I’m sorry if criticizing pseudoscience is frustrating, but that kind of thinking has no place in rational decision-making.
Your summary of the quoted text is inaccurate. You claim that this is an argument that evidence is not something that is inherently required, but the quote says no such thing. Instead, it references “a large body of scientific evidence” and “stronger evidence” vs “limited evidence”. This quote essentially makes the same argument I do above. How can we square the differences in these interpretations?
The quoted text implies that the evidence would not be sufficient under normal circumstances, hence the “evidence dilemma”. If the amount of evidence was sufficient, there would be no question about what is the correct action. While the text washes its hands of the actual decision to rely on insufficient evidence, it clearly considers this a serious possibility, which is not something that I believe anyone should advocate.
You are splitting hairs about the difference between “no evidence” and “limited evidence”. The report considers a multitude of different AI risks, some of which have more evidence and some of which have less. What is important is that they bring up the idea that policy should be made without proper evidence.
If you believe that evidence that does not withstand scrutiny (that is, evidence that does not meet basic quality standards, contains major methodological errors, is statistically insignificant, is based on fallacious reasoning, or fails scrutiny for any other reason) is evidence that we should use, then you are advocating for pseudoscience. The expected value of benefits based on such evidence is near zero.
I don’t think evidence which is based on something other than “high-quality studies that withstand scrutiny” is pseudoscience. You could have moderate-quality studies that withstand scrutiny, you could have preliminary studies which are suggestive but which haven’t been around long enough for scrutiny to percolate up. I don’t think these things have near zero evidential value.
This is my issue with your use of the term “scientific evidence” and related concepts. Its role in the argument is mostly rhetorical, having the effect of characterizing other arguments or positions as not worthy of consideration without engaging with the messy question of what value various pieces of evidence actually have. It causes confusion and results in you equivocating about what counts as “evidence”.
My view, and where we seem to disagree, is that I think there are types of evidence other than “high-quality studies that withstand scrutiny” and pseudoscience. Look, I agree that if something has basically zero evidential value we can reasonably round that off to zero. But “limited evidence” isn’t the same as near-zero evidence. I think there is a category of evidence between pseudoscience/near-zero evidence and “high-quality studies that withstand scrutiny”. When we don’t have access to the highest quality evidence, it is acceptable in my view to make policy based on the best evidence that we have, including if it is in that intermediate category. This is the same argument made in the quote from the report.
The quoted text implies that the evidence would not be sufficient under normal circumstances
This is exactly what I mean when I say this approach results in you equivocating. In your OP, you explicitly claim that this quote argues that evidence is not something that is needed. You clarify in your comments with me, and in a clarification at the top of your post, that only “high-quality studies that withstand scrutiny” really count as evidence as you use the term. The fact that you are using the word “evidence” in this way is causing you to misinterpret the quoted statement. The quote is saying that even when we don’t have the ideal, high-quality evidence we would like (the kind that might be needed to be highly confident and establish a strong consensus), it is acceptable in situations of uncertainty to make policy based on more limited or moderate evidence. I share this view and think it is reasonable, and not pseudoscientific or somehow a claim that evidence of some kind isn’t required.
If the amount of evidence was sufficient, there would be no question about what is the correct action.
Uncertainty exists! You can be in a situation where the correct decision isn’t clear because the available information isn’t ideal. This is extremely common in real-world decision making. The entire point of this quote and my own comments is that when these situations arise, the reasonable thing to do is to make the best possible decision with the information you have (which might involve trying to get more information), rather than declaring some policies off the table because they don’t have the highest quality evidence supporting them. Making decisions under uncertainty sometimes means making decisions based on limited evidence.
Your argument is very similar to creationist and other pseudoscientific/conspiracy theory-style arguments.
A creationist might argue that the existence of life, humanity, and other complex phenomena is “evidence” for intelligent design. If we allow this to count as “limited” evidence (or whatever term we choose to use), it is possible to run a Pascal’s wager-style argument and posit that this “evidence”, even if it has high uncertainty, is enough to merit an action.
It is always possible to come up with “evidence” for any claim. In evidence-based decision making, we must set a bar for evidence. Otherwise, the word “evidence” would lose its meaning, and we’d be wasting our resources considering every piece of knowledge in existence as “evidence”.
You could have moderate-quality studies that withstand scrutiny
If the studies withstand scrutiny, then they are high-quality studies. Of course, it is possible that a study has multiple conclusions, and some of them are undermined by scrutiny and some are not, or that there are errors that do not undermine the conclusions. These studies can of course be used as evidence. I used “high-quality” as the opposite of “low-quality”, and splitting hairs about “moderate-quality” is uninteresting.
you could have preliminary studies which are suggestive but which haven’t been around long enough for scrutiny to percolate up
This is a good basis when, e.g., funding new research, as confirming and replicating recent studies is an important part of science. In this case, it doesn’t matter that much if the study’s conclusions end up being true or false, as confirmation either way is valuable. Researching interesting things is good, and even bad studies are evidence that the topic is interesting. But they are not evidence that should be used for other kinds of decision-making.
The fact that you are using the word “evidence” in this way is causing you to misinterpret the quoted statement.
You are again splitting hairs about the meanings of words. The important thing is that they are advocating for making decisions without sufficient evidence, which is something I oppose. Their report is long and covers many AI risks, some of which (like deepfakes) have high-quality studies behind them, while others (like X-risks) do not. As a whole, the report “has some evidence” that there are risks associated with AI. So they talk about “limited evidence”. What is important is that they imply decisions should be made even when this “limited evidence” is not sufficient.
But “limited evidence” isn’t the same as near-zero evidence
Splitting hairs again. You can call your evidence “limited evidence” if you want; it won’t earn your argument a free pass to be considered. If it has too much uncertainty or doesn’t withstand scrutiny, it shouldn’t be counted as evidence. Otherwise we end up in the creationist situation.