Generally, the scientific community does not go around arguing that drastic measures should be taken based on singular novel studies. Mainly, what a single novel study produces is a wave of new studies on the same subject, to ensure that the results are valid and that the assumptions used hold up to scrutiny. That's why that low-temperature superconductor was so quickly debunked.
I agree that on average the scientific community does a great job of this, but I think the process is much, much messier in practice than a general description makes it seem. For example, you have the Alzheimer's research that got huge pick-up and massive funding from major scientific institutions even though the original research included doctored images. You have power-posing getting viral attention in science-adjacent media. You have priming, where Kahneman wrote in his book that even if it seems wild you have to believe in it, largely for reasons similar to what is being suggested here, I think: multiple rigorous scientific studies demonstrated the phenomenon. And yet when the replication crisis came around, priming looked a lot shakier than it seemed when Kahneman wrote that.
None of this means that we should throw out the existing scientific community or declare that most published research is false (although ironically there is a peer-reviewed publication with that title!). Instead, my argument is that we should understand that this process is often messy and complicated. Imperfect research still has value and, in my view, still counts as "evidence."
The research and arguments around AI risk are nowhere near as rigorous as a lot of scientific research (and I linked a comment above where I myself criticize AI risk advocates for overestimating the rigor of their arguments). At the same time, this doesn't mean that these arguments contain no evidence or value. There is a huge amount of uncertainty about what will happen with AI. People worried about the risks from AI are trying to muddle through these issues, just as the scientific community has to muddle through figuring things out. I think it is completely valid to point out flaws in arguments, lack of rigor, or overconfidence (as I have also done). But evidence or argument doesn't have to appear in a journal or conference to count as "evidence."
My view is that we have to live with the uncertainty and make decisions based on the information we have, while also trying to get better information. Doing nothing and going with the status quo is itself a decision that can have important consequences. We should use the best evidence we have to make the best decision given uncertainty, not just default to the status quo when we lack ideal, rigorous evidence.