Your argument is very similar to creationist and other pseudoscientific, conspiracy-theory-style arguments.
A creationist might argue that the existence of life, humanity, and other complex phenomena is “evidence” for intelligent design. If we allow this to count as “limited” evidence (or whatever term we choose to use), it is possible to follow a Pascal’s-wager-style argument and posit that this “evidence”, even if highly uncertain, is enough to merit action.
It is always possible to come up with “evidence” for any claim. In evidence-based decision making, we must set a bar for evidence. Otherwise, the word “evidence” would lose its meaning, and we would waste our resources treating every piece of knowledge in existence as “evidence”.
You could have moderate-quality studies that withstand scrutiny
If the studies withstand scrutiny, then they are high-quality studies. Of course, a study may have multiple conclusions, some of which are undermined by scrutiny while others are not, or it may contain errors that do not undermine its conclusions. Such studies can of course be used as evidence. I used “high-quality” as the opposite of “low-quality”, and splitting hairs over “moderate-quality” is uninteresting.
you could have preliminary studies which are suggestive but which haven’t been around long enough for scrutiny to percolate up
This is a good basis for, e.g., funding new research, as confirming and replicating recent studies is an important part of science. In that case, it does not matter much whether the study’s conclusions turn out to be true or false, as confirming either way is valuable. Researching interesting things is good, and even bad studies are evidence that a topic is interesting. But they are not evidence that should be used for other kinds of decision-making.
The fact that you are using the word “evidence” in this way is causing you to misinterpret the quoted statement.
You are again splitting hairs over the meanings of words. The important thing is that they are advocating for making decisions without sufficient evidence, which is something I oppose. Their report is long and covers many AI risks, some of which (like deepfakes) have high-quality studies behind them, while others (like X-risks) do not. As a whole, the report “has some evidence” that there are risks associated with AI, so they talk about “limited evidence”. What matters is that they imply decisions should be made on the basis of this “limited evidence”, even though it is not sufficient.
But “limited evidence” isn’t the same as near-zero evidence
Splitting hairs. You can call your evidence “limited evidence” if you want; that will not earn your argument a free pass. If the evidence carries too much uncertainty or does not withstand scrutiny, it should not be accepted as evidence. Otherwise we end up in the creationist situation.
I don’t really see why anyone would use AI for anything other than maybe translating phrases if one does not know English (or the language they are writing in) well.
Language models add unnecessary fluff that takes space away from actual content. I’d ask the following question: is the text written by the AI longer than the prompt? If the answer is yes, then please do not use that AI text. All the extra text written by the model is meaningless fluff.
Language models are also really bad at writing. They tend to overuse stylistic devices to the point that the text becomes heavy and difficult to read. They split the text into unnatural sections, add more subheadings than necessary, overuse tables and lists, and in general write horribly. Some people claim that AI lets those who are bad at writing take part in discussions, but that argument makes no sense, since AI cannot write either, and I’d rather read bad English than fluff. The only exception is using AI for translation, since that is obviously a real blocker.
Lastly, there is the question of effort. Putting effort into writing is proof that someone cares enough about the topic to write it down, so I should also care enough to read it. If I recognize that a text is AI-written, my motivation to go through it drops immediately. In a world of constant spam and sensory (and literary) overload, my time is valuable, and effort is the currency that gets me to read your text.